Claude versus ChatGPT -- and a few thoughts on using AI chatbots on an Alaskan cruise
18 September 2023 | 7:00 am

In this post, I compare ChatGPT and Claude based on my use of both during an Alaskan cruise. Claude seems better at handling long content; ChatGPT, shorter content. Using both chatbots, I asked many questions to learn about my cruise surroundings. The chatbots expanded my curiosity and made me more attentive to my environment by encouraging endless questions.

Currently, two of the most popular chatbots are ChatGPT and Claude. Most people have heard of ChatGPT; fewer have heard of Claude. Until recently, Claude was available for free in a limited way, allowing only a certain number of queries per hour. Then Anthropic came out with Claude Pro, offering 5x more usage, priced the same as ChatGPT Plus: $20/month.

I have subscriptions to both ChatGPT Plus and Claude Pro and have been comparing them. Both provide similarly intelligent answers to most questions I ask. Where they differ most is in input limits: Claude accepts about 100k tokens of input, while ChatGPT Plus accepts around 4k tokens. (100k tokens is about 75k words; 4k tokens is about 3k words.)

This input limit is really the context window of the chat session. That means you can upload a long article and follow it with a few questions (for example, a 90k-token article followed by a 10k-token chat), or you can upload a short article and follow it with a lot of questions (for example, a 10k-token article followed by a 90k-token chat).
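As a rough illustration of this trade-off (using the ~0.75 words-per-token approximation mentioned above; this is my own sketch, not anything from the chatbot vendors), here's how the context budget splits between an uploaded document and the follow-up chat:

```python
# Rough sketch of how a fixed context window is split between an
# uploaded document and the follow-up conversation. The 0.75
# words-per-token ratio is the approximation used in the post.
WORDS_PER_TOKEN = 0.75

def remaining_chat_budget(context_window_tokens, document_words):
    """Return the tokens (and approximate words) left for chat after a
    document of the given word count is loaded into the context."""
    document_tokens = round(document_words / WORDS_PER_TOKEN)
    chat_tokens = max(context_window_tokens - document_tokens, 0)
    return chat_tokens, round(chat_tokens * WORDS_PER_TOKEN)

# A 100k-token window (Claude) with a 60k-word upload leaves ~20k tokens of chat.
print(remaining_chat_budget(100_000, 60_000))  # (20000, 15000)
# A 4k-token window (ChatGPT) with a 3k-word upload leaves nothing.
print(remaining_chat_budget(4_000, 3_000))     # (0, 0)
```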

For example, after I upload several chapters from my API documentation course into Claude, I can only ask a few questions in the chat session before I have to start a new chat session. And with a new chat session, the context of the uploads is lost. But even having this upload limit size is phenomenal compared to ChatGPT, whose token limit is around 4k.

Anyway, here are some additional differences between Claude and ChatGPT:

  • You can upload PDFs to Claude (max file size is 10 MB). With ChatGPT, you have to paste in the content.
  • Claude seems faster with its responses. The site loads quickly, and answers appear more rapidly. In contrast, ChatGPT sometimes hangs and requires me to hit Regenerate. At times ChatGPT’s text appears word-by-word at a plodding pace.
  • Claude has more safeguards (such as guarding against biases). In contrast, you can “jailbreak” ChatGPT more easily.

I’m sure there are many more nuances across different knowledge domains. For example, is one better at math than the other? Coding questions? Specialized knowledge? I’m not sure. That kind of testing is beyond the scope of this post.

Other realizations about LLMs in general

Before jumping into more comparison tests between Claude and ChatGPT, let me detour a bit. My value as an informal tech commentator is in relating my own experiences with AI, so bear with me.

Despite my enthusiasm for AI, my wife remains uninterested, skeptical, and dismissive of AI chatbots. I keep waiting for her to have that magic moment where she gets it, but so far, it hasn’t come. My hypothesis for her apathy is that LLMs are likely only appealing to creator types — content creators, coders, builders, etc. In many other scenarios, AIs don’t have a compelling use case.

But this past week I had opportunities to use Claude and ChatGPT a number of times in non-creator scenarios. I was on a cruise to Alaska, traveling along the western coast of Canada and stopping in Juneau, Skagway, and Ketchikan. During the cruise, we listened to naturalist lectures and tour guides’ explanations and stories. With each lecture and excursion, I had questions I wanted to ask, like most people. Sometimes I asked the guide, but I also started asking Claude and ChatGPT.

For example, during the ship naturalist’s lectures on bears, salmon, and trees, I asked Claude the following:

  • Who is Robert Raincock, the naturalist? [This was the ship’s lecturer.]
  • What questions would specifically connect with Raincock’s research in a personal way?
  • I don’t seem to really care about whales or seeing them. Is something wrong with me?
  • What does it mean for salmon to be anadromous?
  • What does it mean for salmon to spawn?
  • When people catch salmon, are the salmon mostly in the spawning stage?
  • What does this saying refer to: if black fight back, if brown lay down
  • How could I engage with a grizzly in hand to claw combat and win? … What if I have a long knife? … Even if stabbed in the eye?
  • How many people in Alaska die from bear attacks yearly?
  • What do bears love that humans also love?
  • Do people eat bear meat?
  • Will climate change force polar bears south?
  • If polar bear meets a grizzly, will they fight?
  • Why is a fed bear a dead bear?
  • Are any other fish besides salmon anadromous?
  • How are AI tools applicable to bear or salmon study?
  • What are the key ideas in the book Salmon in the Trees?
  • How do salmon know when they’ve reached the point where they began life? … What are the biological theories about how they do this?
  • Why don’t more animals eat the dead salmon that have just finished spawning and are laying in the water?
  • Which is the most popular theory for how salmon navigate back home?
  • Do all salmon originate in Alaska?

You can see my Claude chat about salmon, bears, and trees here.

As I started to ask a few initial questions, the answers to those questions gave rise to more. My curiosity grew and grew. I soon became full of questions.

During the naturalist lectures, the speaker didn’t take questions (there were 200 people listening). But even if he had, can you imagine someone asking 20 annoying questions like these? Not only would it derail the lecture’s focus, but some of my questions are embarrassing. Do I really want to ask how to win hand-to-claw combat against a bear? Or admit that I don’t really care about seeing whales, or that I don’t know what “salmon spawning” means or whether all salmon originate from Alaska? With AI, though, we can ask the dumb or embarrassing questions!

Also, AI tools are good at answering general knowledge questions like these. You could probably find most of these answers on Wikipedia, but only after a long and tedious search. For example, try browsing the results for my question “Are any other fish besides salmon anadromous?” on Wikipedia. The answers are scattered in fragments across various pages. It might be even harder to find answers to questions about knife tactics in grizzly fights, except perhaps in Reddit threads.

In contrast, LLMs provide quick, immediate answers directly addressing my question. Could the answers be riddled with errors? Sure. And if you’ve asked a question that you don’t know the answer to, you’re in a position of vulnerability. However, I’ll gladly take quick answers that are 90% accurate over 99% accurate answers that take 10x as long to find.

I’ll often pose the same question to both Claude and ChatGPT as a validation check on each other. I assume the common ground of their responses is less likely to be a hallucination. It’s like getting a second opinion from another doctor. When multiple people tell you the same thing, the chances of it being true increase.
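The "common ground" check I describe can be made concrete with a toy sketch (this is purely my illustration, not a rigorous validation method): measure how much two answers lexically overlap. High overlap doesn't prove correctness, but low overlap flags claims worth double-checking.

```python
# Minimal sketch of the "second opinion" idea: Jaccard similarity
# between the word sets of two chatbot answers. Low overlap suggests
# the answers diverge and deserve a closer look.
def answer_overlap(answer_a: str, answer_b: str) -> float:
    """Return the Jaccard similarity between the word sets of two answers."""
    words_a = set(answer_a.lower().split())
    words_b = set(answer_b.lower().split())
    if not (words_a | words_b):
        return 0.0
    return len(words_a & words_b) / len(words_a | words_b)

# Hypothetical answer snippets, not real chatbot output.
claude_says = "salmon navigate home using the earth's magnetic field and smell"
chatgpt_says = "salmon use magnetic field sensing and smell to navigate home"
print(round(answer_overlap(claude_says, chatgpt_says), 2))  # 0.54
```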

Do LLMs encourage curiosity?

During the cruise excursions, I found myself thinking, I have a device in my pocket that can provide answers to practically any questions I have. As such, I started using Claude and ChatGPT more and more on the trip, in almost every place I visited.

On a naturalist tour through a temperate rainforest area, I did ask our tour guide (rather than the AI chatbots) a couple of questions, like what are these plants with the long leaves (skunk cabbage, it turns out), what kind of fish are in the lake (rainbow trout), why are the trees around us withering (tannic acid), and how do they calculate the age of trees without cutting them down to count the rings (they take core samples).

These questions were warmups for the larger questions that started to surface in my mind. Questions like, what did this forest look like 300 million years ago? When did trees originally form? If the life of a tree can be 900+ years, do trees have a slower pace of evolution? If trees evolve so slowly, and evolution is such a key trait towards dominance, why are trees still so abundant? If you could trace back the evolution of trees to the first tree, which kind would it be? Did Spruce trees evolve from Western Hemlock, or vice versa? What was the first tree? Which organisms evolve the fastest and slowest? Did tree life need to evolve first in order to support animal life?

Since our guide wasn’t a full-fledged biologist, I didn’t ask him these questions. Plus, the answers I was getting from Claude (when service was available) were pretty good. At this point, my questions would have only tested his knowledge. Part of me wanted to ask the questions to the guide because I was proud of how clever they were, especially my question about the first tree, or what the forest looked like 300 million years ago. But looking clever didn’t seem purposeful, and there wasn’t time for endless Q&A on this short rainforest jaunt anyway.

These scenarios with LLMs were pretty cool. I started to get confident that I could find answers to nearly every question I had. I felt incredibly capable. What do I do with this infinite, immediate information about everything and anything?

If nothing else, asking questions to AI tools begat more questions, and that endless stream of questions made me more curious, observant, and attentive. A semi-quiet, beautiful forest was no longer just a beautiful temperate forest setting. It became a space for critical inquiry, for questions to answer. Each answer led to more questions and thoughts. Through these tools, my curiosity expanded, my attention focused and stayed present, and I was never bored.

Coming back to an earlier theme: are LLMs just good for creators? In these tourist scenarios, obviously not. They are useful in unfamiliar settings, as long as you have a curious attitude. Ask one question and you’ll find yourself asking half a dozen more. Will these tools catch on with the mainstream? I’m not sure. Default search tools will morph more and more into LLMs. Maybe some day, we’ll use AI chat as the primary interface to the world’s information.

How will we use this information to not only answer basic questions like mine, but to move towards new questions that don’t have answers? Will these tools fuel and expand our curiosity? How will we synthesize the answers into new information that’s outside the LLM’s predictive reach?


Comparing Claude’s answers with ChatGPT

Let’s get back to comparing Claude versus ChatGPT. It just so happens that, during one of the ship excursions, we were in an excursion group (ziplines!) with the ship’s navigator. The navigator invited us on a special tour of the ship’s bridge, where we could see the captain, officers, and other crew using the navigation controls. To prepare for this special bridge tour, I started thinking up questions to ask:

  • Navigation. How do you navigate the ship? GPS? What if GPS goes down? Satellite GPS? If you had to navigate using another method, what would it be? Could you navigate by way of the stars?
  • Avoiding hazards. How do you see underwater to make sure we don’t run into anything like icebergs, whales, shallow water, large rocks, other stuff in the ocean? How do you avoid collisions with other cruise ships in the area? How much do you pay attention to weather in route planning and navigation?
  • Automation. Does the map data get delivered automatically to components of the ship, or does the crew here do all the navigation manually? How are AI tools used in navigation? Autorouting? Auto-parking? Bad weather avoidance? Collision avoidance?
  • City welcomings. Are cruise ships limited to the places they can stop due to the requirements of parking the ships? Are there any cities that don’t allow cruise ships to port? Do cities love or hate it when a giant cruise ship pulls into port?
  • Behind the scenes maneuvers. What are some things passengers didn’t even know about on this trip that might have been safety hazards we had to avoid or altered decisions made about our route?
  • Leaving passengers behind. What happens if a passenger doesn’t come back from an on-shore visit? How long do you wait? How frequently does it happen?
  • Sinking scenarios. What’s the most likely scenario that would cause the ship to sink? A tsunami from an erupting volcano? Terrorist takes control? Bilge pump malfunctions? Rogue waves that capsize the ship? Fire from an engine explosion?
  • Analytics from wearable tech. Can you use the medallions to track the most popular activities on the ship and then cater those activities around what people do the most? E.g., tons of metrics around people going to a certain show vs. not others? Can these analytics be used to better shape the activities and cruise experiences?
  • Crew chiefs. Is there someone in charge of the passengers overall? Is the captain mostly concerned about the ship while there’s like a head person over the passengers?

I did leverage the AI chatbots to help come up with some of the questions.

I initially planned to ask my questions to the bridge crew. But then I decided to ask Claude and ChatGPT for the answers even before the tour. Here’s my initial prompt for the chat session:

You're the captain of the Princess cruise line, a luxury ship that carries 9,000 people from Seattle to Alaska, stopping at Juneau, Skagway, and Ketchikan. You're giving a tour to a select subgroup of passengers. Answer these questions:

Then I went through each of the above bullets, pasting them in separately rather than all at once because the answers tend to be more detailed that way. You can view the responses here:

Looking at the responses, ChatGPT’s answers are more detailed and thorough. Claude’s answers are decent but shorter and more succinct. During ChatGPT’s responses, the chat session timed out twice, requiring me to hit Regenerate for the response. Additionally, ChatGPT took significantly longer to type out the answers than Claude.

During the actual bridge session (see pic below), I asked only a couple of questions. Claude and ChatGPT’s responses didn’t really predict what the navigator explained on the bridge in detail. But it was still fun to do this.

Bridge tour

Conclusion

In conclusion, Claude is the better choice when you have longer content to analyze, but ChatGPT might be better for shorter content. Both tools, however, can expand your curiosity and fill your mind with questions. This curiosity about the world around you can make you more attentive and present.


Alphadoc: Build API documentation that tells your API's story
17 September 2023 | 7:00 am

A few weeks ago I mentioned Alphadoc, a new tool for publishing API documentation. The following is a Q&A with Daan Stolk, cofounder/CPO of Alphadoc. In the following questions, Daan tells the story behind Alphadoc and what makes it unique from other API documentation tools.

Note: This is a sponsored post.

Tom: Can you tell us why you started Alphadoc? Does your company have an origin story? Was there anything lacking in the existing API tools in the marketplace?

Daan: We previously built API products in fintech and healthcare, and we felt that the tools on the market focused on the technical side of things, whereas we wanted to help companies achieve commercial success through great developer onboarding. We didn’t think the world needed another docs platform, but rather a platform that helps create an interactive onboarding experience for APIs, SDKs, and codebases: one that helps create and improve content and makes the content easy to understand (by making it much more visually appealing).

A large part of the success of companies like Stripe and Twilio comes from their focus on developer experience from the very beginning. We wanted to productize this and offer it to the market, making it accessible to all companies, even those without a dedicated developer experience (DX) team. So far our focus has been primarily on companies with OpenAPI specifications, but our latest feature now also helps companies with SDKs explain how to use their products.

Tom: Looking at the recent posts on the Alphadoc blog, you're releasing major new features multiple times a month. Are you nearing an official release date yet for Alphadoc?

Daan: We have been shipping big features monthly since the start of this year. Together with design partners, we have shaped the product to the point where it is today, and we intend to open up for wider availability at the end of September.

Tom: What is the new Code-Walkthrough tool about? Also, there’s a sequence diagram builder?

Daan: We had already built a complete platform for displaying REST APIs and making guides based on those APIs. But companies were coming to us that have more than just REST APIs; they need to combine documentation for SDKs and REST APIs. So we released a new feature called Code-Walkthroughs, which allows for highlighting code and creating step-by-step guides spanning multiple files.

Code Walkthrough

We also automatically generate diagrams for REST API “chains.” To create REST API guides, you add endpoint blocks to a tutorial. You can configure endpoint blocks to only display the parameters you want to display (see the following screenshot).

To create a chain, add two (or more) endpoints in a tutorial together by indicating that a value in the response of one endpoint should be used in the request of the next endpoint. The diagram that we generate displays a data relationship which represents the chain you just made. Any updates you make to the chain will automatically reflect in the diagram.

In the diagram, more information becomes available as you click through the blocks. Both the endpoints and the diagram automatically update when a new version of the API spec is uploaded.
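The chaining idea Daan describes, where a value from one endpoint's response feeds the request to the next endpoint, can be sketched in plain Python. The endpoint names and fields here are hypothetical stand-ins, not Alphadoc's API:

```python
# Hypothetical sketch of an API "chain": the response of one endpoint
# supplies a value used in the request to the next endpoint. The
# endpoints are stubbed with dictionaries instead of live HTTP calls.
def create_customer(name):
    """Stub for a POST /customers call; returns a response with an id."""
    return {"id": "cus_123", "name": name}

def create_order(customer_id, item):
    """Stub for a POST /orders call; requires the id from the previous call."""
    return {"order_id": "ord_456", "customer": customer_id, "item": item}

# Step 1 of the chain.
customer = create_customer("Ada")
# Step 2: the chain link -- the response's "id" becomes the next request's input.
order = create_order(customer["id"], item="kayak")
print(order["customer"])  # cus_123
```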

Chaining


We’re releasing a new feature soon: sequence diagrams. You will be able to create sequence diagrams by quickly clicking together lifelines (vertical lines) and messages (which represent API calls), and you get more flexibility compared to the generated diagrams.

Sequence diagrams

So, one version of the diagrams is automatically generated when you create a tutorial with API chains. The other version is a sequence diagram that has more flexibility.

Tom: I saw that change logs are automatically created. How is this being done and how are you integrating AI features?

Daan: We do support automatically creating comparisons whenever you add a new version of your OpenAPI spec to our platform. Our AI functionality is a chat interface built on top of OpenAI’s API; we focused on helping developers get answers to their questions as quickly as possible. We use all the content added to the platform, which can be written guides, code, APIs, and SDKs.
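At its core, a version comparison like the one Daan mentions boils down to diffing two spec versions. Here's a toy sketch (my illustration, not Alphadoc's implementation) comparing the paths declared in two OpenAPI documents:

```python
# Toy sketch of an OpenAPI change log: compare the paths declared in
# two versions of a spec and report what was added or removed. A real
# comparison would also diff parameters, schemas, and responses.
def diff_paths(old_spec: dict, new_spec: dict):
    """Return the endpoint paths added and removed between two specs."""
    old_paths = set(old_spec.get("paths", {}))
    new_paths = set(new_spec.get("paths", {}))
    return {
        "added": sorted(new_paths - old_paths),
        "removed": sorted(old_paths - new_paths),
    }

v1 = {"paths": {"/customers": {}, "/orders": {}}}
v2 = {"paths": {"/customers": {}, "/orders": {}, "/refunds": {}}}
print(diff_paths(v1, v2))  # {'added': ['/refunds'], 'removed': []}
```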

The feature is still experimental, but the response so far has been great. Users can ask any question about your content, and we’ll answer. All data submitted to OpenAI is opted out of training their models.

AI functionality

Because of the nature of technical documentation and the need for it to be correct, we built it in a way that minimizes hallucinations (AI making up things). In the future we’ll be using the feedback derived from user behavior to create content suggestions. When several people ask similar questions, it likely needs to be clarified in your docs.

Using insights to improve the documentation is still something we’re working on (and are keen on getting feedback!). We’re also very aware of privacy concerns when capturing search queries done by developers and will introduce a compliant way for our users to investigate these queries in the future.

A funny side effect is that with the recent explosion of LLMs, the threshold to write code and build software and integrations is lowered significantly, and that makes it even more important to have a developer onboarding that’s understandable for both seasoned and citizen developers.

Tom: Regarding support for OpenAPI, a lot of tools say they support OpenAPI but might not support some of the more advanced specification features. Can you talk a bit about why Alphadoc has deep support for the OpenAPI spec? I think many tech writers might not even be aware of tooling limitations for some schemas until they encounter them.

Daan: The same goes for us: we only became aware of the depth of our support for the OpenAPI spec when we encountered users who had tried and tested various other solutions and found that none of them fully supported some options in the spec. OpenAPI has built-in concepts for combining and reusing schemas across the entire specification (Google “OpenAPI polymorphism” to learn more!). The most common occurrences are anyOf, oneOf, and allOf.

From a tool vendor’s perspective, this introduces complexity, which we’ve decided to tackle head on. If you have used any of these concepts in your OpenAPI file, you’ll see them properly rendered throughout the platform, down to code snippets that update based on what you have selected.
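For readers unfamiliar with the polymorphism keywords, here is a minimal oneOf fragment. The schema names are my own invented example, shown as the Python dict equivalent of the YAML rather than anything from Alphadoc's docs:

```python
# Minimal example of OpenAPI polymorphism: a payment method that must
# match exactly one of two alternative schemas. A renderer with full
# oneOf support lets the reader pick a variant and updates the code
# snippets accordingly.
payment_method = {
    "oneOf": [
        {"$ref": "#/components/schemas/CardPayment"},
        {"$ref": "#/components/schemas/BankTransfer"},
    ],
    "discriminator": {"propertyName": "type"},
}

def variant_names(schema):
    """List the schema names a oneOf can resolve to."""
    return [ref["$ref"].rsplit("/", 1)[-1] for ref in schema.get("oneOf", [])]

print(variant_names(payment_method))  # ['CardPayment', 'BankTransfer']
```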

Another good example of a deeper level of support is our ability to display self-referencing schemas, also known as circular references. When a schema references itself, it can break some of the popular open-source OpenAPI tools.
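Circular references are easy to see in miniature. In this sketch (my own illustration, not Alphadoc code), a `Category` schema references itself through its subcategories, the pattern that breaks tools that naively expand every `$ref`:

```python
# Sketch of detecting a self-referencing (circular) schema. A schema
# like Category -> subcategories -> Category never terminates if a
# renderer blindly expands every $ref it encounters.
def find_refs(node):
    """Yield every $ref value in a schema fragment."""
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "$ref":
                yield value
            else:
                yield from find_refs(value)
    elif isinstance(node, list):
        for item in node:
            yield from find_refs(item)

def is_circular(schemas, name):
    """Return True if the named schema can reach itself by following $refs."""
    seen, stack = set(), [name]
    while stack:
        current = stack.pop()
        for ref in find_refs(schemas.get(current, {})):
            target = ref.rsplit("/", 1)[-1]
            if target == name:
                return True
            if target not in seen:
                seen.add(target)
                stack.append(target)
    return False

schemas = {
    "Category": {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "subcategories": {
                "type": "array",
                "items": {"$ref": "#/components/schemas/Category"},
            },
        },
    }
}
print(is_circular(schemas, "Category"))  # True
```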

There are still many OpenAPI concepts we have yet to tackle but see a ton of value in providing support for all of them as time goes on.

Some of our users have really large APIs with thousands of endpoints—a scale our platform easily supports. In terms of performance, we made a ton of improvements to offer a snappy experience.

Tom: How do you enable collaboration across tech writers, product managers, developers, and marketers? Do they use Git with version control to collaborate? Or does Alphadoc provide other collaboration workflows? How do you balance robust tools with simple interfaces and workflows that less technical people can interact with?

Daan: We frequently work with companies that want to make it easier for people to collaborate internally on developer docs. We started working with a flexible WYSIWYG editor. Markdown can be typed in the UI and is immediately rendered. We are also offering engineers ways to automatically push new versions of their API spec to our platform via Git.

One of the most common points of feedback we’ve received was that writers want to manage all the content (not just the API) in a Git workflow. We’re happy to mention that before the end of the year we’ll support docs-as-code workflows on top of the browser-based editor.

Today we also support embedding our components into other solutions or platforms, which makes Alphadoc modular and available in combination with other content platforms. One of our users has already embedded a tutorial directly into their application to help developers immediately see how to use their API when they have created their API key.

Tom: You mentioned that Alphadoc supports complex storytelling for varying levels of technical users. Storytelling isn't a feature I commonly see promoted in API tools. Can you talk about what you mean by storytelling and maybe provide an example?

Daan: At Alphadoc, when we talk about “storytelling,” we mean more than just documentation; we’re focused on creating an interactive and engaging narrative around your APIs and SDKs. It’s about guiding users through the functionalities and possibilities of your product in a coherent manner. Our platform allows you to configure each API endpoint, work with variables, and establish chains of endpoints.

This approach empowers companies in various industries, such as ecommerce and fintech, to effectively convey their product stories. Companies in these industries often have API use cases where data (IDs) needs to be carried over between endpoints, from one API call to the next. They also often have endpoints that are very flexible, which makes it difficult to explain all the possible ways to use them.

The tutorials and diagrams in Alphadoc allow you to completely configure each endpoint when you add it to a page, work with variables (think Postman), and chain up endpoints. When everything has been configured, we provide helpful code snippets for each API call and automatically generate interactive diagrams that display the relationship between all the steps. The developer who is integrating can try out API calls with live data.

To see this concept in action, explore our own documentation at docs.alphadoc.io, where you can witness how we transform technical documentation into an interactive storytelling experience.

Tom: How can people learn more about Alphadoc and try it out?

Daan: To start building your docs & DX with Alphadoc, please fill out the sign-up form or visit our website at alphadoc.io. We’ll review your request for access within 1 business day. If granted access, you can upload your OpenAPI spec and easily create a compelling developer experience.


New API course topic: Using AI for summaries
7 September 2023 | 7:00 am

Providing summaries of content is one of the most useful and powerful capabilities of AI chatbots powered by large language models (LLMs), like ChatGPT, Bard, and Claude. As such, AI chatbots can significantly help tech writers in a variety of documentation-related tasks, such as generating summaries at the top of each document, generating product overviews that summarize features and capabilities, and helping tech writers process content more quickly from long articles, bugs, meetings, and other documents.

See Using AI for summaries to read the article.


