Bryan Braun has an interesting post about his experience with what he calls a “haunted domain”:
Being able to ascertain the reputation and vitality of a domain is an intriguing problem.
At a societal level, it’s not really a problem we’ve had to wrestle with (yet). Imagine, for example, 200 years from now when somebody is reading a well-researched book whose sources point to all kinds of domains online which at one point were credible sources but now are possibly gambling, porn, or piracy web sites.
If you take the long view, we’re still at the very beginning of the internet’s history where buying a domain has traditionally meant you’re buying “new”.
But what will happen as the lifespan of a domain begins to outpace us humans? You’re going to want to know where your domain has been.
The “condition” of a domain could very well become a million-dollar industry with its own experts. Similar to how we have CARFAX vehicle history reports, domain history reports could become standard fare when purchasing a domain.
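If you wanted a crude version of such a report today, the closest existing tool is probably the Wayback Machine. Here’s a rough sketch (the function name and report shape are my own invention, nothing standard) that pulls one archived capture per year from the Wayback Machine’s public CDX API, so you can eyeball what a domain has been over its lifetime:

```typescript
// Hypothetical "domain history report" built on the Wayback Machine's
// public CDX API. One capture per year is enough to spot a domain that
// flipped from, say, a research lab to a casino.

type Snapshot = { year: string; status: string; url: string };

async function domainHistory(domain: string): Promise<Snapshot[]> {
  const cdx =
    `https://web.archive.org/cdx/search/cdx?url=${encodeURIComponent(domain)}` +
    `&output=json&fl=timestamp,statuscode,original&collapse=timestamp:4`;
  const res = await fetch(cdx);
  const rows: string[][] = await res.json();
  // The first row of the JSON output is a header row; skip it.
  return rows.slice(1).map(([timestamp, status, url]) => ({
    year: timestamp.slice(0, 4),
    status,
    url,
  }));
}

// Usage: domainHistory("example.com").then(console.table);
```

It’s not a title report, but it’s the seed of one: when the domain was alive and what it was serving.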
In fact, the parallels to other real-world purchase-verification and buyer-protection programs are intriguing.
Establishing legal ownership of a vehicle through titles is something the government participates in. You’ll often hear people advertise a used vehicle as having a “clean title”, meaning it doesn’t have a salvaged or rebuilt title (which indicates a history of damage).
But how the car was used is also important. Was the car part of a fleet, e.g. a rental car? Was it owned by a private party its whole life? Did that person drive for Uber? So. Many. Questions.
Even where the car has been is important. I remember buying a car in Las Vegas once, and the dealer told me how people from New England frequently fly into town to buy a car. Why? Because they want a car that has spent its life in the dry desert and is thus free of the effects of harsh winters (rust from snowy, salted roads, etc.).
When you buy a house, getting an inspection is pretty standard fare (where I live). Part of the home buying process includes paying an inspector to come in and assess the state of the home — and hopefully find any problems the current owners either don’t know about or are keeping from you.
But the history of the home is often important too. Was it a drug house? Was it a rental? An Airbnb?
Houses have interesting parallels to domains because their lifespans are so much longer than those of other things like cars (my current house was built in the 1930s). Homes can go through many owners, some of whom improve them over time while others damage them.
You know those home renovation shows? The nerd version of that would be a “domain renovation” show where people buy old domains, fix them up (get them whitelisted with search engines again, build lots of reputable inbound links, etc.), and then flip them for a good price.
What I love most about this world is the verbiage around the various grading systems, and how even within a single grade like “Mint” there are further sub-categorizations. Can you imagine something similar for domains?
Honestly, I don’t really have anything else to say about this, lol. It’s simply fun to think about and write down what comes to mind.
That said, it wouldn’t surprise me if a standardized grading system with institutional verification rises up around the world of domains. URLs are everywhere — from official government records to scrawled on the back of a door in a public restroom — and they’ll be embedded in civilization for years to come. (Even more so if domains become the currency of online handles!)
Today usa.gov is backed by the full faith and credit of the United States Government. But a few hundred years from now, who knows? Maybe it’ll be an index for pirated music.

Whatever happened to romanempire.gov?
Nicholas Carr, one of my favorite technology writers, has been blogging over on Rough Type since [checks archives] 2005. Of late his writing has gone quiet, but he’s got a new book due out early next year, and I think he’s starting up blogging again to help drum up interest.
However, he’s not blogging on Rough Type anymore. He has a new blog called New Cartographies. If you’ve not already subscribed, this is a public service announcement to do so.
(Also, interesting meta conversation here: he’s a NYTimes best-selling author, he’s had a long-running blog built on WordPress, but now he’s switching to Substack — I wanna know all the deets, who can report on this? ha.)
His first article, titled “Dead Labor, Dead Speech”, is a good ’un:
LLMs...feed “vampire-like” on human culture. Without our words and pictures and songs, they would cease to function. They would become as silent as a corpse in a casket.
It’s interesting how the advent of social media made it easy for people to create content, and the media platforms turned all that speech into content for humans (a mass of user-generated content ranked and tailored for relevance to you).
Now LLMs are taking all that speech and turning it into content for machines:
They use living speech not as content for consumer consumption but as content for machine consumption.
But it’s even more than just consumption. Previously, plagiarizing the speech of others on social platforms required effort on your part to obfuscate both individual voices and sources. But now you can pass the speech of others through an LLM and let the machine do all that work for you! A form of knowledge laundering, if you will.
But, as Carr is always quick to point out, we shape our tools and then they shape us:
The mind of the LLM is purely pornographic. It excels at the shallow, formulaic crafts of summary and mimicry. The tactile and the sensual are beyond its ken. The only meaning it knows is that which can be rendered explicitly. For a machine, such narrow-mindedness is a strength, essential to the efficient production of practical outputs. One looks to an LLM to pierce the veils, not linger on them. But when we substitute the LLM’s dead speech for our own living speech, we also adopt its point of view. Our mind becomes pornographic in its desire for naked information.
Can’t wait for his new book!
Native apps are all about control. Don’t like thing X? You can dive in and, with enough elbow grease and persistence, finally get what you want. Write your own C library. Do some assembly code. Even make your own hardware if you have to.
But on the web you give up that control. Can’t quite do the thing you want? Your options are: 1) make a native app, 2) make a browser that does what you want (see: Google), or 3) rethink and reset the constraints of your project.
But when you choose to build for the web instead of native, you’re not just giving up control in return for nothing. It’s a trade-off. You trade control for reach.
For example, browsers won’t just let any website read from and write to disk (who wants to grant that to any website in the world?). You can view that as a loss of control, but I see it as a constraint and a trade-off. In return, you get a number of security and privacy guarantees — and you also get reach: now any computer, even ones without hard drives, can access your website! (The same goes for any hardware feature, like webcams, microphones, geolocation, accelerometers, etc.)
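To make that concrete, here’s a small sketch of what “reading from disk” looks like on the web (using the File System Access API, which at the time of writing ships only in Chromium-based browsers). The page never gets raw filesystem access; it gets exactly one handle, granted by the user through the browser’s own picker:

```typescript
// The web's version of disk access: a capability request, not raw I/O.
// The browser draws the file picker, the user chooses, and the page
// receives a single file handle. (Chromium-only at the time of writing.)

async function readUserChosenFile(): Promise<string> {
  // Cast to any because showOpenFilePicker isn't in every TS lib.dom yet.
  const [handle] = await (window as any).showOpenFilePicker();
  const file: File = await handle.getFile();
  return file.text();
}

// The same request-a-capability pattern applies to the other hardware
// features mentioned above:
// navigator.geolocation.getCurrentPosition((pos) => console.log(pos.coords));
// navigator.mediaDevices.getUserMedia({ audio: true }); // prompts for the mic
```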
The baseline assumptions for browsers are far broader and more inclusive than those of native apps. The browser is the “lowest common denominator” of computing. That sounds like a derisive label, but it’s not. Once again, it’s about trade-offs.
In math, the lowest common denominator is about simplification: it’s the smallest shared base that lets you combine fractions with different denominators. And there’s a parallel here to computing: browsers reduce the computing tasks available across a variety of devices to the simplest common form, which means you can reach more devices — and therefore more people.
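For the record, the arithmetic version of that idea looks like this: you rewrite fractions over the smallest base they share before combining them.

```latex
\frac{1}{4} + \frac{1}{6}
  = \frac{3}{12} + \frac{2}{12}
  = \frac{5}{12}
\qquad\text{(the lowest common denominator of 4 and 6 is 12)}
```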
So while the term “lowest common denominator” can be used in a disparaging manner — like it’s the most crude, basic form of something available — if your primary goal is reach, then basic is exactly what you want.