Why MCP and Claude Skills Matter More Than Open Weights Alone
The headlines about open-source AI keep circling back to model weights: whether they
should be public, who controls them, and what that means for transparency.
That’s important, but it’s only part of the story. Real, broad-based innovation
comes from making systems legible, composable, and easy to tinker with: the same
properties that gave rise to Linux, Apache, and the web. MCP and Claude Skills
are interesting because they begin to offer that kind of participatory
architecture for AI.
The real traits of participatory systems
Open-source ecosystems thrive when
small, interoperable pieces let people learn, modify, and share. The
architecture of participation has four central qualities:
- Legibility: You can understand what a component does without
reading the entire system.
- Modifiability: You can change one piece without rewriting everything.
- Composability: Components talk through simple, well-defined
interfaces.
- Shareability: A small contribution can be useful to others without
forcing them to adopt your whole stack.
Unix, HTML + “view source”, and the
early web didn’t win by exposing the lowest-level parts of a system. They won
by exposing usable parts – little tools, snippets, and interfaces that let
people learn by copying, remixing, and improving.
MCP and Skills: “view source” for AI
Anthropic’s MCP (Model Context
Protocol) servers and Claude Skills behave like those readable, remixable
components. An MCP server is a small service that lets an AI use a database,
call internal APIs, or interact with third-party services. A skill is an
atomic, human-readable bundle of instructions, a bit of code, and reference material
that teaches Claude a concrete task. As Matt Bell put it, a skill is “the
bundle of expertise to do a task and is typically a combination of
instructions, code, knowledge, and reference materials.”
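As a rough illustration, a skill is just a small, readable Markdown bundle. The sketch below is my own simplified example, not Anthropic’s exact format (though the `SKILL.md` file with name/description frontmatter follows their published convention); the task, steps, and referenced files are invented:

```markdown
---
name: quarterly-report
description: Turn a CSV of sales figures into a formatted quarterly summary.
---

# Instructions
1. Load the attached CSV and compute revenue totals per region.
2. Flag any region whose quarter-over-quarter change exceeds 10%.
3. Render the summary using the template in `report-template.md`.

# References
- report-template.md
- style-guide.md
```

Everything here is inspectable: a reader can see exactly what the skill does, copy it, and adapt step 2 or the template to their own domain.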
What makes these powerful is
accessibility. If you can write a Python function or a Markdown file, you
can create a skill or an MCP server. That’s the same dynamic that made the
early web explode: discoverable artefacts you can learn from and adapt.
Early signs of ecosystem behavior
- MCP registries and community lists (e.g., community
GitHub projects) already let people share servers for Postgres, Stripe,
and more.
- Skills for niche tasks (like spreadsheet analysis) are
being forked and adapted on GitHub.
- That modularity means dozens of apps can gain capabilities simply by reusing community-built components.
GPTs vs. Skills: apps vs. materials
OpenAI’s custom GPTs and Anthropic’s
Skills/MCPs represent different philosophies.
- GPTs are packaged apps. Built through conversations and file uploads, GPTs are
shareable to use but opaque in structure. You can’t easily “view source”,
fork, or combine pieces across GPTs.
- Skills & MCPs are open materials. Markdown, code repositories, and protocol-based servers: these
are readable, editable, and composable artefacts that exist outside a
single company’s shopfront.
An app store can be useful, but it
is a walled garden. The open-source revolution came from artefacts you could
inspect, modify, and combine. Skills and MCPs belong to that lineage; they’re
materials, not finished products.
The likely evolution and the risk
History suggests complexity grows,
but layers keep participation alive. The web evolved from static HTML to
complex frameworks, yet people still contribute at different layers (vanilla
HTML, Tailwind, Next.js). MCP and skills are simple today, almost “naive”, but
that will change. Expect to see:
- Abstraction layers and higher-level frameworks
- Composition patterns where skills call other skills
- Performance and optimization toolchains
- Improved security, isolation, and permission models
The crucial question: will those
layers preserve the architecture of participation, or will they lock innovation
behind expertise and resources?
I’m optimistic: Claude already helps
people author skills, which could keep contribution accessible even as systems
grow more complex.
Why open weights aren’t the whole answer
Model weights matter; they’re the
foundation. But they’re like processor instructions. The more transformative
layer is the interface layer
where AI meets real use cases. MCP and Skills create composable, comprehensible
interfaces that ordinary developers (and increasingly non-developers) can use
to build solutions.
Programming itself is evolving. The
rise of plain-language prompts and Skills means people can describe problems
and assemble “fuzzy function calls” without writing low-level code. That makes
it possible for domain experts, not just professional developers, to build
reusable AI solutions to “scratch their own itch”, as Eric Raymond emphasised
back in the open-source era.
Fuzzy function calls: making “magic words” inspectable
LLMs
already respond to high-level concepts and domain shorthand like “plan”, “TDD”, “study mode”, and “.md file”. Currently, many of those are baked into model
training or hidden tooling. Skills and MCPs make those patterns explicit and
editable.
Imagine
“study mode” published as a skill: you can read how it works, tweak the
interaction for medicine vs. language learning, combine it with your course
materials via an MCP, and share your variant. That turns opaque magic into
reusable building blocks, the foundation
for libraries of what I call “fuzzy functions”.
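One way to picture such a library is as plain, forkable data. The sketch below is purely illustrative (the `FuzzyFunction` class and its fields are my own invention, not any Anthropic API); it shows how a shared skill could be inspected and adapted for a new domain simply by editing fields:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class FuzzyFunction:
    """A skill-like bundle: natural-language instructions plus
    reference material, readable and forkable like source code."""
    name: str
    instructions: str
    references: tuple[str, ...] = ()

# A shared "study mode" skill that anyone can read.
study_mode = FuzzyFunction(
    name="study-mode",
    instructions="Quiz the learner with spaced repetition; "
                 "reveal answers only on request.",
    references=("general-study-tips.md",),
)

# Forking it for medicine is just editing fields --
# no opaque app to reverse-engineer.
medical_study = replace(
    study_mode,
    name="study-mode-medicine",
    references=study_mode.references + ("anatomy-notes.md",),
)

print(medical_study.name)
print(medical_study.references)
```

Because the bundle is data rather than a sealed app, a fork is a one-field edit and a variant is trivially shareable, which is exactly the property that made “view source” generative.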
The economics of participation: mechanism design for AI
This
is where things get systemic. Open ecosystems aren’t just technically superior;
they’re better market designs. The web and open source lowered transaction
costs, improved discovery, and created reputation effects. AI needs the same
mechanism-design breakthroughs that made platforms generative rather than extractive.
Current
problems:
- Attribution is invisible
(models trained on public code rarely credit creators).
- Value capture concentrates with
a few platform owners.
- Improvement loops are closed;
sharing better prompts or skills is ad hoc.
- Quality signals are weak;
discoverability and reputation for skills remain primitive.
MCP
and skills are early infrastructure for a participatory AI market: discoverable
components that can carry attribution and become monetisable. But the
mechanisms for efficient matching, compensation, and reputation are still
missing.
Legal and licensing questions we must solve
Open-source
norms don’t directly translate to this world. Much code on GitHub lacks
explicit licensing, which creates ambiguity. More importantly for AI:
- What licences govern data
flowing through MCP servers?
- Who owns derivative outputs
that combine proprietary data and model inference?
- How should we recognise and
compensate creators whose material trained or informs models?
There are initiatives like RSL and CC Signals, but nothing yet tailored to AI’s transformative uses. We’ll need new licensing ideas and technical protocols to track attribution and usage across networks of agents and skills.
Programming is moving up the stack
As
tools mature, developers will debug and design at higher levels: system
architecture, data integrity, and UX rather than low-level code generation.
Phillip Carter’s observation is apt: LLMs invert traditional tradeoffs; they’re
great at fuzziness but need help for precision. MCP and skills help provide
that precision by connecting LLMs to reliable tools and explicit instructions.
What the industry should do next: a short playbook
- Make innovations inspectable. Publish effective interaction patterns and tool
configs as readable artefacts, not closed apps.
- Support open protocols. MCP’s cross-platform adoption shows what’s possible
when a standard solves real problems.
- Create contribution pathways at
all skill levels. From
single-prompt templates to full MCP servers, make all contributions
visible and valued.
- Document “magic”. Collect and version common prompting patterns, so
knowledge isn’t scattered across forums.
- Reinvent licensing. Design licences and protocols that cover both training-
and inference-era rights and recognition.
- Engage economists and
designers. Apply mechanism design to
build participatory markets that direct value to contributors, not only
platforms.
Open weights are necessary but not
sufficient. The long-term winner in AI won’t simply be whoever releases model
parameters. It will be whoever builds the best ways for people to participate: the
tools, standards, registries, and markets that let everyday developers and
domain experts discover, reuse, improve, and monetise components.
We’re at a crossroads. AI can become
a landscape of closed app stores and extractive platforms, or it can follow the
open-web lineage of legible, forkable components and flourishing ecosystems. I
know which future I prefer: one where ordinary people scratch their own itches,
share what they learn, and lift the whole industry with them.