A Well-Known Links Resource
15 August 2022 | 7:00 pm

In my previous article exploring what might go on my blog’s home page, I wondered aloud:

It’d be neat to be able to surface (credible) sites that are linking to my posts, like, “[Post Foo] linked from the Sidebar.io newsletter” or, “[Post Bar] linked from css-tricks.com”.

That got me thinking about another previous article where I explored the idea of creating and surfacing an index of all the outbound links on my blog — something you can see here.

That index of links is mine and its representation is an HTML document within the context of my blog. But it got me wondering: why just for me?

What if everyone — individuals, companies, etc. — surfaced their outbound links in an open, accessible way that could then be aggregated into one source for querying?

Individual Websites

First, we’d need a convention for the location of this resource, as it would need to be consistent across websites. Good thing we already have a convention for that! The well-known URI. In my case, I’m going to commandeer /.well-known/links.

Any domain that supports this convention can make a giant JSON blob of all their (outbound) links accessible at this endpoint, grouped by domain, e.g.:

[
  {
    "domain": "twitter.com",
    "count": 129,
    "links": [
      {
        "sourceUrl": "https://blog.jim-nielsen.com/...",
        "targetUrl": "https://twitter.com/..."
      },
      {
        "sourceUrl": "https://blog.jim-nielsen.com/...",
        "targetUrl": "https://twitter.com/..."
      }
      // 127 more...
    ]
  }
  // More domains here...
]
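
Nothing about this shape is standardized; here’s a hedged TypeScript sketch of the format, with field names mirroring the example above:

// A single outbound link: where it lives on my site, and where it points
interface WellKnownLink {
  sourceUrl: string; // page on this site containing the link
  targetUrl: string; // external page being linked to
}

// Links grouped by target domain, as in the JSON above
interface WellKnownLinkGroup {
  domain: string; // e.g. "twitter.com"
  count: number;  // total links to this domain
  links: WellKnownLink[];
}

// The whole resource at /.well-known/links
type WellKnownLinks = WellKnownLinkGroup[];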

Additionally, you could let people query your site’s links for a particular domain. For example, someone might ask, “I wonder what links Jim has to my site?” And the answer would be a query to:

https://blog.jim-nielsen.com/.well-known/links?domain=mysite.com

The response to that query is some JSON that will list every URL on blog.jim-nielsen.com that links to mysite.com.
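
Serving that could be as simple as filtering the same JSON blob by the domain query parameter. Here’s a minimal sketch as a Node HTTP server; the links.json file is an assumption standing in for however you store the index:

import { createServer } from "node:http";
import { readFileSync } from "node:fs";

// Assumed: links.json holds the grouped outbound-link data shown earlier
const groups = JSON.parse(readFileSync("links.json", "utf8"));

createServer((req, res) => {
  const url = new URL(req.url ?? "/", "https://blog.example.com");
  if (url.pathname !== "/.well-known/links") {
    res.writeHead(404).end();
    return;
  }
  // With ?domain=mysite.com, return just that domain's group;
  // with no query, return the whole index
  const domain = url.searchParams.get("domain");
  const body = domain ? groups.filter((g) => g.domain === domain) : groups;
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify(body));
}).listen(8080);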

If this pattern were more broadly available on the web, it would empower me to ask something like, “I wonder where CSS-Tricks links to me?” And I could query here:

https://css-tricks.com/.well-known/links?domain=blog.jim-nielsen.com

And get an answer! Every URL on CSS-Tricks that links to my blog. A form of bi-directional link metadata.

All The Websites

Now, it gets a little trickier if you want to know more broadly, “Who is linking to me?”

Trying to find all the domains on the web that support /.well-known/links would be hard. However, in theory someone could figure it out — index every site you can find that has data at /.well-known/links and put it into one giant, queryable resource.

That’s where the magic would be.

Imagine if someone like Google — who killed the link: search operator, BTW — made this index and allowed you to query it. All you’d have to do is type:

google.com/find-my-links?domain=blog.jim-nielsen.com

And, given its own database of /.well-known/links from across the web, it could serve back links from all the sites on the internet it knows about.
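
A toy version of that aggregator is easy to sketch: given some seed list of domains known to publish the resource, fetch each one’s /.well-known/links and build an inverted index keyed by target domain. The domain list below is made up, and this assumes every endpoint returns the format described earlier:

// Hypothetical seed list of sites that publish /.well-known/links
const domains = ["blog.jim-nielsen.com", "css-tricks.com"];

// target domain -> every { sourceUrl, targetUrl } pair pointing at it
const index = new Map<string, { sourceUrl: string; targetUrl: string }[]>();

for (const domain of domains) {
  const res = await fetch(`https://${domain}/.well-known/links`);
  if (!res.ok) continue; // skip sites that don't support the convention
  for (const group of await res.json()) {
    const links = index.get(group.domain) ?? [];
    links.push(...group.links);
    index.set(group.domain, links);
  }
}

// "Who is linking to me?" becomes a lookup:
console.log(index.get("blog.jim-nielsen.com"));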

I can imagine that being an incredibly valuable resource, so getting someone to do it freely and openly might seem a bit crazy.

Not to mention that you would have to get people to actually adopt the pattern of indexing all their outbound links and making them available in a standardized format at /.well-known/links.

And it probably doesn’t work at scale. I mean, an index route for twitter.com/.well-known/links would be wildly big! But, then again, a call to twitter.com/.well-known/links?domain=blog.jim-nielsen.com would be wildly interesting!

It’s a fun thought to entertain.

Available Now

As mentioned, I’m already collecting all the external links on my site, so I went ahead and made that data publicly available as a prototype of the above concept.

You can hit blog.jim-nielsen.com/.well-known/links and see the data for yourself. Do a CMD + F in the response to find your domain, or you could try hitting /.well-known/links?domain=xxx with your domain and see if there’s a hit.

You should try indexing all the outbound links on your site. Do it for yourself; you’ll likely find some interesting or even surprising patterns. And then, once you’ve done it for yourself, you may as well make it publicly available for others to look at and consume. Why not?
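
If your site builds to a folder of static HTML, the indexing step can be a small build script. Here’s a hedged sketch; the build directory, hostname, and regex-based href extraction are all assumptions (a real version would want a proper HTML parser):

import { mkdirSync, readdirSync, readFileSync, writeFileSync } from "node:fs";
import { join } from "node:path";

const BUILD_DIR = "_site";          // assumption: your build output folder
const MY_HOST = "blog.example.com"; // assumption: your own hostname

// target domain -> { sourceUrl, targetUrl } pairs found in the built HTML
const byDomain = new Map<string, { sourceUrl: string; targetUrl: string }[]>();

for (const file of readdirSync(BUILD_DIR, { recursive: true }) as string[]) {
  if (!file.endsWith(".html")) continue;
  const html = readFileSync(join(BUILD_DIR, file), "utf8");
  const sourceUrl = `https://${MY_HOST}/${file.replace(/index\.html$/, "")}`;
  // Quick-and-dirty href extraction, good enough for a prototype
  for (const [, href] of html.matchAll(/href="(https?:\/\/[^"]+)"/g)) {
    const domain = new URL(href).hostname;
    if (domain === MY_HOST) continue; // internal link, skip it
    const links = byDomain.get(domain) ?? [];
    links.push({ sourceUrl, targetUrl: href });
    byDomain.set(domain, links);
  }
}

// Write the index out in the format described earlier
const groups = [...byDomain].map(([domain, links]) => ({
  domain,
  count: links.length,
  links,
}));
mkdirSync(join(BUILD_DIR, ".well-known"), { recursive: true });
writeFileSync(join(BUILD_DIR, ".well-known", "links"), JSON.stringify(groups, null, 2));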




Playing With My Blog’s Home Page
14 August 2022 | 7:00 pm

I’ve been wanting to play with my blog’s home page. Previously, I had two lists: 1) most recent, and 2) popular (based on the last 30 days of analytics data from Netlify).

I thought, “What other kinds of lists could I add?” Two ideas came to mind:

  1. My own personal favorites
  2. Stuff that’s been popular on HackerNews

Number one seemed like a great idea and was easy enough to implement (it’s actually a list of about ten, from which a random few get selected each time my site builds anew).
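
For what it’s worth, that kind of build-time selection can be tiny. A hedged sketch (the favorites list is made up, and I’m guessing at the pick-a-few behavior):

// Hypothetical hand-picked favorites (about ten in practice)
const favorites = ["/2021/post-foo/", "/2022/post-bar/", "/2022/post-baz/"];

// Fisher-Yates shuffle, so every build gets an unbiased random order
function shuffle<T>(items: T[]): T[] {
  const a = [...items];
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

// Take a few favorites for the home page, re-rolled on every build
const picks = shuffle(favorites).slice(0, 3);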

Number two I mostly wanted to do for fun, like “Can I do it?” I find integrating with a JSON API strangely fun.

I’ve seen tons of Hacker News clones, so I knew there must be some API I could use to search for links that’ve been posted for my domain blog.jim-nielsen.com.

Sure enough, Algolia’s Hacker News search API was precisely what I wanted. A quick read through the docs and I found the API call I needed: a search query restricted to just the URL.

http://hn.algolia.com/api/v1/search?query=blog.jim-nielsen.com&restrictSearchableAttributes=url

It gives back a list of hits, each of which contains a few pieces of info I could use. Here’s a sample:

{
  "hits": [
    {
      "title": "Canistilluse.com",
      "url": "https://blog.jim-nielsen.com/2021/canistilluse.com/",
      "points": 555,
      "num_comments": 361,
      "objectID": "28309885"
    },
    {
      "title": "Inspecting Web Views in macOS",
      "url": "https://blog.jim-nielsen.com/2022/inspecting-web-views-in-macos/",
      "points": 536,
      "num_comments": 140,
      "objectID": "30648424"
    }
  ]
}

I added an async step to my build to go fetch this data and make it available in the model of my site’s data for use in templating.
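
For flavor, here’s a hedged sketch of what that build step might look like; my actual build plumbing is glossed over, but the API call and response fields are the ones shown above:

// Build-time fetch against Algolia's Hacker News search API
const endpoint =
  "https://hn.algolia.com/api/v1/search" +
  "?query=blog.jim-nielsen.com&restrictSearchableAttributes=url";

const { hits } = await (await fetch(endpoint)).json();

// Keep just what the templates need, most-commented posts first
const hackerNewsHits = hits
  .map((hit) => ({
    title: hit.title,
    url: hit.url,
    points: hit.points,
    comments: hit.num_comments,
    hnUrl: `https://news.ycombinator.com/item?id=${hit.objectID}`,
  }))
  .sort((a, b) => b.comments - a.comments);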

That got me a list of “HackerNews Hits” on my home page.

Reading comments on HackerNews isn’t usually my thing, but if it’s yours, you now have a potential signal for where to start reading on my blog.

After sitting on it a little longer, I decided to cut down each of my home page lists to three items each.

I kind of like where this ended up. Rather than just a giant chronological list of posts (which I have in the archives), I’ve got some modicum of curation going on. The conversation in my head is: “New to the blog and not sure what to read? Here are a few points of interest that could serve as jumping-off points.”

  1. My most recent posts
  2. Posts that are being viewed a lot right now
  3. Posts that, historically, have ended up on HackerNews and had the most comments
  4. My own personal favorites

What’s fun about these lists is how dynamic they are. If a post goes and gets tons of comments on HackerNews, it’ll show up on my home page the next time my site builds.

It kind of makes me want to add a few more lists; I’m just trying to think of what those could be.

It’d be neat to be able to surface (credible) sites that are linking to my posts, like, “[Post Foo] linked from the Sidebar.io newsletter” or, “[Post Bar] linked from css-tricks.com”. I’m not sure how you’d do that without some form of human curation — “an automated tool that indexes the web and surfaces credible sites linking to yours” sounds a lot like rebuilding Google.

If I had webmentions set up, maybe I could pull some interesting stats out of that for the home page? Like:

  • Post Foo (15 recent mentions, including css-tricks.com and twitter.com)
  • Post Bar (20 recent mentions, including domain.com)

I just haven’t been able to muster the energy to set up webmentions yet. I’ll think about it some more. And I’ll think about what other possible angles of curation I could invent for my home page.




Things You Can And Can’t Do
12 August 2022 | 7:00 pm

I heard Ryan F. say:

You can make your server fast. You can’t make your users’ network fast.

And it got me thinking about what you can and can’t do — what you do and don’t have control over.

You can’t:

  • Make your user’s network transfer faster
  • Make your user’s device compute faster

But you can:

  • Make your user’s network transfer less
  • Make your user’s device compute less

A server can make a better user experience, in some cases, because it can do compute and network tasks on behalf of the user — you do it so they don’t have to.

  • Do compute (e.g. templating and business logic) on your server; then you don’t have to transfer that logic over the network to your user’s device for execution. You can make your server’s compute faster, for example by beefing up its hardware and optimizing its software.
  • Do network calls on your server, leveraging the backbone connectivity available in your cloud provider’s networking infrastructure. Your server probably has a great internet connection. Your users? Who knows. (A sketch of this idea follows the list.)
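
As one hedged illustration of that second point, a server can fan out to several upstream APIs over its fast connection and hand the user’s device a single small response; the upstream URLs here are placeholders:

import { createServer } from "node:http";

// Hypothetical upstream APIs the page needs data from
const upstreams = [
  "https://api.example.com/posts",
  "https://api.example.com/comments",
  "https://api.example.com/profile",
];

createServer(async (_req, res) => {
  // Three network calls happen here, on the server's fast connection...
  const results = await Promise.all(
    upstreams.map(async (url) => (await fetch(url)).json())
  );
  // ...so the user's device makes one request and gets one small payload
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify(results));
}).listen(8080);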

Again, you can’t make a user’s network faster, nor can you make their device hardware faster. However, you can make the experience on their network and hardware perceptually faster: shrink the workload on their network and device by having your server do the heavier lifting.

In this way, you don’t always put the onus on the user’s device and network connection to self-serve your website or platform.

As the famous saying goes: “Grant me the serenity to accept the things I cannot change”, like a user’s network and hardware, “the courage to change the things I can,” like my server’s workload on its network and hardware, “and wisdom to know the difference.”




