Currently, two of the most popular chatbots are ChatGPT and Claude. Most people have heard of ChatGPT; fewer have heard of Claude. Until recently, Claude was available for free in a limited way, allowing only a certain number of queries per hour. Then Anthropic released Claude Pro, offering 5x the usage. They priced it the same as ChatGPT Plus: $20/month.
I have subscriptions to both ChatGPT Plus and Claude Pro and have been comparing them. Both provide similarly intelligent answers to most questions I ask. Where the chatbots mostly differ is in their input limits: Claude accepts about 100k tokens of input, while ChatGPT Plus accepts only around 4k tokens. (100k tokens is about 75k words; 4k tokens is about 3k words.)
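The tokens-to-words conversion above is just a rule of thumb (roughly 0.75 words per token for English). Here's a minimal sketch of that heuristic; the ratio is an assumption, since actual tokenization varies by model:

```python
# Rough token estimator based on the ~0.75 words-per-token rule of thumb.
# This is a heuristic only; real tokenizers vary by model.

WORDS_PER_TOKEN = 0.75  # assumption: ~1 token per 0.75 English words

def estimate_tokens(text: str) -> int:
    """Estimate the token count of a text from its word count."""
    words = len(text.split())
    return round(words / WORDS_PER_TOKEN)

def fits_context(text: str, limit_tokens: int) -> bool:
    """Check whether a text likely fits within a model's context window."""
    return estimate_tokens(text) <= limit_tokens
```

By this estimate, a 75k-word article lands right around Claude's 100k-token limit, while ChatGPT Plus's ~4k-token window tops out near 3k words.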
This input limit is also the context window of the chat session. That means you can upload a long article and follow it with a few questions (for example, a 90k-token article followed by a 10k-token chat), or upload a short article and follow it with a lot of questions (for example, a 10k-token article followed by a 90k-token chat).
For example, after I upload several chapters from my API documentation course into Claude, I can ask only a few questions before I have to start a new chat session. And with a new chat session, the context of the uploads is lost. But even this upload size limit is phenomenal compared to ChatGPT's roughly 4k-token limit.
Anyway, here are some additional differences between Claude and ChatGPT:
I’m sure there are a lot more nuances across different knowledge domains. For example, is one better at math than another? Coding questions? Specialized knowledge? I’m not sure. That kind of testing is beyond the scope of my post.
Before jumping into more comparison tests between Claude and ChatGPT, let me detour a bit. My value as an informal tech commentator is in relating my own experiences with AI, so bear with me.
Despite my enthusiasm for AI, my wife remains uninterested, skeptical, and dismissive of AI chatbots. I keep waiting for her to have that magic moment where she gets it, but so far, it hasn’t come. My hypothesis for her apathy is that LLMs are likely only appealing to creator types — content creators, coders, builders, etc. In many other scenarios, AIs don’t have a compelling use case.
But this past week I had opportunities to use Claude and ChatGPT a number of times in a non-creator scenarios. This week I was on a cruise to Alaska, spending time traveling along the western coast of Canada and stopping in Juneau, Skagway, and Ketchikan. During the cruise, we listened to naturalist lectures and tour guide explanations and stories. With each lecture and excursion, I had questions I wanted to ask, like most people. Sometimes I asked the guide, but I also started asking Claude and ChatGPT.
For example, during the ship naturalist’s lectures on bears, salmon, and trees, I asked Claude the following:
You can see my Claude chat about salmon, bears, and trees here.
As I started to ask a few initial questions, the answers to those questions gave rise to more. My curiosity grew and grew. I soon became full of questions.
During the naturalist lectures, the speaker didn’t take questions (there were 200 people listening). But even if he did, can you imagine someone asking 20 annoying questions like this? Not only would it derail the lecture’s focus, but some of my questions are embarrassing. Do I really want to ask questions about how to win at hand-to-claw combat against a bear? Or admit that I don’t really care about seeing whales, or that I don’t know what “salmon spawning” means or whether all salmon originate from Alaska? With AI, however, we can ask the dumb or embarrassing questions!
Also, AI tools are good at answering general knowledge questions like this. You could probably find most of these answers on Wikipedia, but only after a long and tedious search. For example, try browsing the results to my question “Are any other fish besides salmon anadromous?” on Wikipedia. The answers are scattered in fragments across various pages. It might be harder to find answers to questions about knife tactics in Grizzly fights, except perhaps on Reddit threads.
In contrast, LLMs provide quick, immediate answers directly addressing my question. Could the answers be riddled with errors? Sure. And if you’ve asked a question that you don’t know the answer to, you’re in a position of vulnerability. However, I’ll gladly take quick answers that are 90% accurate over 99% accurate answers that take 10x as long to find.
I’ll often pose the same question to both Claude and ChatGPT as a validation check on each other. I assume the common ground of their responses is less likely to be a hallucination. It’s like getting a second opinion from another doctor. When multiple people tell you the same thing, the chances of it being true increase.
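This "common ground" check can be sketched as a crude overlap measure (my own illustration, not how either chatbot works internally): if the same substantive words appear in both answers, the shared claims are less likely to be hallucinated.

```python
# Crude cross-validation heuristic: measure content-word overlap between
# two model answers. High overlap suggests shared (and likelier correct)
# claims; it is NOT a guarantee against hallucination.
import re

def content_words(answer: str) -> set[str]:
    """Lowercase word set, ignoring very short function words."""
    return {w for w in re.findall(r"[a-z]+", answer.lower()) if len(w) > 3}

def overlap_score(answer_a: str, answer_b: str) -> float:
    """Jaccard similarity of the two answers' content-word sets."""
    a, b = content_words(answer_a), content_words(answer_b)
    if not (a or b):
        return 0.0
    return len(a & b) / len(a | b)
```

A human reading both answers does this far better than any word-overlap score, of course; the sketch just makes the "second opinion" logic concrete.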
During the cruise excursions, I found myself thinking, I have a device in my pocket that can provide answers to practically any questions I have. As such, I started using Claude and ChatGPT more and more on the trip, in almost every place I visited.
On a naturalist tour through a temperate rainforest area, I did ask our tour guide (rather than the AI chatbots) a couple of questions, like what are these plants that have long leaves (skunk cabbage, it turns out), what kind of fish are in the lake (rainbow fish), why are the trees around us withering (due to tannic acid), and how do they calculate the age of trees without cutting them down to count the rings (they take core samples).
These questions were warmups for the larger questions that started to surface in my mind. Questions like, what did this forest look like 300 million years ago? When did trees originally form? If the life of a tree can be 900+ years, do trees have a slower pace of evolution? If trees evolve so slowly, and evolution is such a key trait towards dominance, why are trees still so abundant? If you could trace back the evolution of trees to the first tree, which kind would it be? Did Spruce trees evolve from Western Hemlock, or vice versa? What was the first tree? Which organisms evolve the fastest and slowest? Did tree life need to evolve first in order to support animal life?
Since our guide wasn’t a full-fledged biologist, I didn’t ask him these questions. Plus, the answers I was getting from Claude (when service was available) were pretty good. At this point, my questions would have only tested his knowledge. Part of me wanted to ask the questions to the guide because I was proud of how clever they were, especially my question about the first tree, or what the forest looked like 300 million years ago. But looking clever didn’t seem purposeful, and there wasn’t time for endless Q&A on this short rainforest jaunt anyway.
These scenarios with LLMs were pretty cool. I started to get confident that I could find answers to nearly every question I had. I felt incredibly capable. What do I do with this infinite, immediate information about everything and anything?
If nothing else, asking questions to AI tools begat more questions, and that endless stream of questions made me more curious, observant, and attentive. A semi-quiet, beautiful forest was no longer just a scenic setting. It became a space for critical inquiry, for questions to answer. Each answer led to more questions and thoughts. Through these tools, my curiosity expanded, my attention stayed focused and present, and I was never bored.
Coming back to an earlier theme: are LLMs just good for creators? In these tourist scenarios, obviously not. They are useful in unfamiliar settings, as long as you have a curious attitude. Ask one question and you’ll find yourself asking half a dozen more. Will these tools catch on with the mainstream? I’m not sure. Default search tools will morph more and more into LLMs. Maybe someday we’ll use AI chat as the primary interface to the world’s information.
How will we use this information to not only answer basic questions like mine, but to move towards new questions that don’t have answers? Will these tools fuel and expand our curiosity? How will we synthesize the answers into new information that’s outside the LLM’s predictive reach?
Let’s get back to comparing Claude versus ChatGPT. It just so happens that, during one of the ship excursions, we were in an excursion group (ziplines!) with the ship’s navigator. The navigator invited us on a special tour of the ship’s bridge, where we could see the captain, officers, and other crew using the navigation controls. To prepare for this special bridge tour, I started thinking up questions to ask:
I did leverage the AI chatbots to help come up with some of the questions.
I initially planned to ask my questions to the bridge crew. But then I decided to ask Claude and ChatGPT for the answers even before the tour. Here’s my initial prompt for the chat session:
Then I went through each of the above bullets, pasting them in separately rather than all at once because the answers tend to be more detailed that way. You can view the responses here:
Looking at the responses, ChatGPT’s answers are more detailed and thorough. Claude’s answers are decent but shorter and more succinct. During ChatGPT’s responses, the chat session timed out twice, requiring me to hit Regenerate for the response. Additionally, ChatGPT took significantly longer to type out the answers than Claude.
During the actual bridge session (see pic below), I asked only a couple of questions. Claude and ChatGPT’s responses didn’t really predict what the navigator explained on the bridge in detail. But it was still fun to do this.
In conclusion, Claude is the better choice when you have longer content to analyze, but ChatGPT might be better for shorter content. Both tools, however, can expand your curiosity and fill your mind with questions. This curiosity about the world around you can make you more attentive and present.
Note: This is a sponsored post.
Daan: We previously built API products in fintech and healthcare, and we felt that tools on the market were focused on the technical side of things, whereas we wanted to help companies achieve commercial success through great developer onboarding. We didn’t think the world needs another docs platform, but rather a platform that creates an interactive onboarding experience for APIs, SDKs, and codebases: one that helps create and improve content and makes that content easy to understand (by making it much more visually appealing).
A large part of the success of companies like Stripe and Twilio comes from their focus on developer experience from the very beginning. We wanted to productize this and offer it to the market, making it accessible to all companies, even those without a dedicated developer experience (DX) team. So far our focus has been primarily on companies with OpenAPI specifications, but our latest feature now also helps companies with SDKs explain how to use their products.
Daan: We have been shipping big features monthly since the start of this year. Together with design partners, we have shaped the product to the point where it is today, and we intend to open up for wider availability at the end of September.
Daan: We have already built a complete platform for displaying REST APIs and creating guides based on those APIs. But companies were coming to us that have more than just REST APIs: they need to combine documentation for SDKs and REST APIs. So we’ve released a new feature called Code-Walkthroughs, which allows highlighting code and creating step-by-step guides across multiple files.
We also automatically generate diagrams for REST API “chains.” To create REST API guides, you add endpoint blocks to a tutorial. You can configure endpoint blocks to only display the parameters you want to display (see the following screenshot).
To create a chain, add two (or more) endpoints in a tutorial together by indicating that a value in the response of one endpoint should be used in the request of the next endpoint. The diagram that we generate displays a data relationship which represents the chain you just made. Any updates you make to the chain will automatically reflect in the diagram.
In the diagram, more information becomes available as you click through the blocks. Both the endpoints and the diagram automatically update when a new version of the API spec is uploaded.
We’re releasing a new feature soon—Sequence diagrams. You will be able to create sequence diagrams by quickly clicking together the lifelines (vertical lines) and messages (these will represent API calls), and you get more flexibility compared to the generated diagrams.
So, one version of the diagrams is automatically generated when you create a tutorial with API chains. The other version is a sequence diagram that has more flexibility.
Daan: We indeed support automatically creating comparisons whenever you add a new version of your OpenAPI spec to our platform. Our AI functionality is a chat interface built on top of OpenAI’s API—we focused on helping developers get an answer to their question as quickly as possible. We use all the content that’s added to the platform, which can be written guides, code, APIs, and SDKs.
The feature is still experimental, but the response so far has been great. Users can ask any question about your content, and we’ll answer. All data submitted to OpenAI is opted out of training their models.
Because technical documentation needs to be correct, we built the feature in a way that minimizes hallucinations (the AI making things up). In the future we’ll use feedback derived from user behavior to create content suggestions: when several people ask similar questions, that topic likely needs clarifying in your docs.
Using insights to improve the documentation is still something we’re working on (and are keen on getting feedback!). We’re also very aware of privacy concerns when capturing search queries done by developers and will introduce a compliant way for our users to investigate these queries in the future.
A funny side effect of the recent explosion of LLMs is that the threshold to write code and build software and integrations has dropped significantly, which makes it even more important to have developer onboarding that’s understandable for both seasoned and citizen developers.
Daan: The same goes for us: we only became aware of the depth of our support for the OpenAPI spec when we encountered users who had tried various other solutions and found that none of them fully supported some of the options in the spec. OpenAPI has built-in concepts for combining and reusing schemas across the entire specification (Google “OpenAPI polymorphism” to learn more!). The most common occurrences are anyOf, oneOf, and allOf.
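For readers unfamiliar with these keywords, here's a minimal, hypothetical OpenAPI schema fragment (the schema names are invented for illustration): oneOf says a value matches exactly one of the listed schemas, while allOf composes several schemas together.

```yaml
# Hypothetical OpenAPI 3.0 components fragment illustrating polymorphism.
components:
  schemas:
    PaymentMethod:
      # A payment method is exactly one of the two shapes below.
      oneOf:
        - $ref: '#/components/schemas/CardPayment'
        - $ref: '#/components/schemas/BankTransfer'
    BasePayment:
      type: object
      properties:
        amount:
          type: number
    CardPayment:
      # Composes the shared base schema with card-specific fields.
      allOf:
        - $ref: '#/components/schemas/BasePayment'
        - type: object
          properties:
            cardNumber:
              type: string
    BankTransfer:
      type: object
      properties:
        iban:
          type: string
```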
From a tool vendor perspective, this introduces complexity, which we’ve decided to tackle head-on. If you have used any of these concepts in your OpenAPI file, you’ll see them properly rendered throughout the platform, down to code snippets updating based on what you have selected.
Another good example of a deeper level of support is our ability to display self-referencing schemas, also known as circular references. When a schema references itself, it can break some of the popular open source OpenAPI tools.
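A self-referencing schema looks something like this hypothetical fragment, where a Category can contain nested Categories:

```yaml
# Hypothetical circular reference: the Category schema refers to itself.
components:
  schemas:
    Category:
      type: object
      properties:
        name:
          type: string
        subcategories:
          type: array
          items:
            $ref: '#/components/schemas/Category'
```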
There are still many OpenAPI concepts we have yet to tackle but see a ton of value in providing support for all of them as time goes on.
Some of our users have really large APIs with thousands of endpoints—a scale our platform easily supports. In terms of performance, we made a ton of improvements to offer a snappy experience.
Daan: We frequently work with companies that want to make it easier for people to collaborate internally on developer docs. We started working with a flexible WYSIWYG editor. Markdown can be typed in the UI and is immediately rendered. We are also offering engineers ways to automatically push new versions of their API spec to our platform via Git.
One of the most common points of feedback we’ve received was that writers want to manage all the content (not just the API) in a Git workflow. We’re happy to mention that before the end of the year we’ll support docs-as-code workflows on top of the browser-based editor.
Today we also support embedding our components into other solutions or platforms, which makes Alphadoc modular and available in combination with other content platforms. One of our users has already embedded a tutorial directly into their application to help developers immediately see how to use their API when they have created their API key.
Daan: At Alphadoc, when we talk about “storytelling,” we mean more than just documentation; we’re focused on creating an interactive and engaging narrative around your APIs and SDKs. It’s about guiding users through the functionalities and possibilities of your product in a coherent manner. Our platform allows you to configure each API endpoint, work with variables, and establish chains of endpoints.
This approach empowers companies in various industries, such as ecommerce and fintech, to effectively convey their product stories. These companies often have API use cases where data (such as IDs) needs to be carried over between API calls. They also often have endpoints that are very flexible, which makes it difficult to explain all the possible ways to use them.
The tutorials and diagrams in Alphadoc allow you to completely configure each endpoint when you add it to a page, work with variables (think Postman), and chain up endpoints. When everything has been configured, we provide helpful code snippets for each API call and automatically generate interactive diagrams that display the relationship between all the steps. The developer who is integrating can try out API calls with live data.
To see this concept in action, explore our own documentation at docs.alphadoc.io, where you can witness how we transform technical documentation into an interactive storytelling experience.
Daan: To start building your docs & DX with Alphadoc, please fill out the sign-up form or visit our website at alphadoc.io. We’ll review your request for access within 1 business day. If granted access, you can upload your OpenAPI spec and easily create a compelling developer experience.