"The web" is full of ads. Google search is unusable. Ad-blocking is fought against. Content is tailored to be as long as possible to render more ad impressions. Mobile views have 3% content and the rest is ads. 27 compliance popups about cookies and tracking and offers would show on top of the information that you're looking for, all of them rigged so you click the option that they want.
Is that the web you want to save? Let it die.
Because this romantic view of the web as this "ocean of free information" has been dead for a very long time.
I wonder why anyone would even be surprised that people just move naturally to something better? Something that's not even remotely as hostile to the user?
And yes: when VC capital dries up, AI will become equally hostile.
Then people will move to the better thing and we'll have articles about "Better thing is killing AI".
Yes... but that ocean has a lot of trash and algal blooms to work around. If you're not careful, you end up the sea turtle with a stomach full of plastic.
That issue of varying quality of web-based information (and varying ability to assess said quality) has also been the case for a long time.
This is true, and hardly a secret, and the fact that the Economist is being forgetful here is itself information.
So the actual question here is: what are the (financial, geopolitical, social engineering) incentives for the stakeholders of the Economist (please spare me "journalism" tropes) to pooh-pooh AI in this manner?
> With fewer visitors, Stack Overflow is seeing fewer questions posted on its message boards
When you operate a community that's hostile to questions that have already been answered, are poorly researched, or are homework, don't be surprised when people start taking those questions elsewhere, and don't be surprised when they start asking their good questions elsewhere, too.
Stack Overflow is blaming AI for its failing when it's been a steaming pile for a decade.
It's basically always been unusable for anything embedded-related, because every question gets closed and marked as a duplicate of some desktop/web/mobile question with 100,000x the RAM.
I would definitely be surprised if people who've been asking good questions on StackOverflow so far would suddenly ask them elsewhere, since part of what makes a question good for StackOverflow is that it cannot be easily answered elsewhere. E.g. MathOverflow is a forum for mathematicians to talk about their research, they're unlikely to use LLMs for that anytime soon.
So fewer people asking questions doesn't mean the community is dying, it might very well be a sign that they finally succeeded in their war to keep everyone else out.
Probably bad for the company milking the community for profit, though.
After being bullied about a couple of questions which were not already answered on Stack Overflow, I stopped participating in Stack Exchange completely.
The point system, which was meant to motivate people to contribute, became the bar itself. A lower score meant you were not taken seriously, or were considered a noob who stopped using pacifiers and started using computers 30 minutes ago.
So, I returned to what I did best: digging through documentation and taking my own notes. They can pat themselves on the back for keeping the purity and spirit of the network.
> MathOverflow is a forum for mathematicians to talk...
When StackOverflow was new I visited frequently to enjoy the community talking about programming. For others, the goal was always to build the ultimate wiki.
The people who wanted the ultimate wiki won, and the community left, and that's where we see SO today. No community, but it is the ultimate wiki filled with programming wisdom from 2014.
Ironically, they insisted on a very narrow definition of the question and answer format and consequently failed at creating a good wiki. There is no way to approach broader topics. It's always "how do I do x", and the accepted answer, from 2013, is a jQuery plugin that has been missing a maintainer since 2015.
I don't even think it's been about asking good questions, at least not for many years. I used SO regularly and even when I wasn't asking a question at all, it was an incredible slog to just find a concise and correct answer instead of sniping comments from people who wanted to flex knowledge, or others answering from some oblique perspective that had nothing to do with the original problem. Asking good questions is always important, yes, but if the community does not know how to provide good answers, seekers seek elsewhere.
Side note: there's a great guide here on asking good questions, since we still absolutely have that problem to deal with even when using gen AI as a starting point: http://catb.org/~esr/faqs/smart-questions.html
>So fewer people asking questions doesn't mean the community is dying,
If you're not growing, you're dying. Businesses completely perverted that saying, but the basis for it is still true. People move on, change interests, or simply die. You can't have a healthy long-term community without new members coming in.
Seeing the culture of StackOverflow, I would choose to be dead in the ground rather than ask a question there, even a question that can't be easily answered elsewhere. Volunteer-run sites need a mix of enthusiasts who do the real work of the site and janitors who keep things from being too chaotic, but enthusiasm dims while the desire to impose order never does, so eventually the janitors take over.
MathOverflow has a much better culture, so I ask (and answer) questions there. I'm not quite sure why it's worked out better there, though I suppose it's something to do with the population.
Yeah, asking a programming question without some bitter old coder tut-tutting you is very much a selling point with AI chatbots, regardless of my reservations with the overall trend.
True, but the LLM is always polite and in problem solving mode, while SO is in a curating mode. This makes it a great knowledge base and LLM training set, but not a great source to have your questions answered.
I think a big part of why people prefer to ask an online forum instead of using the search function is the human interaction aspect, but that requires two people, including a mentor who is patient and helpful - and unfortunately, that's difficult to find. An LLM is patient, helpful, and problem-solving, but also responds pretty much immediately.
Sure, but so will SO. On most questions it seems that at least a third of the answers are just wrong in one way or another. At least with an LLM you can just tell it it's wrong and have a new, hopefully better, answer 30 seconds later, all without hurting its feelings or getting into a massive online flame war.
It was bootstrapped from SO. Now there are third-party data companies like Scale AI that pay gig workers to write code examples for LLM training. It's died down, but I saw lots of spam for this (i.e. hiring) earlier in the year.
SOTA LLMs didn't get that way by scraping the internet; it's all custom labeled datasets.
Plus, they're getting real world training data from everyone who either hasn't or doesn't have the ability to opt out of their stuff being used.
For my personal stuff, I don't opt out of training for this very reason. What's more, I resent Stack Overflow and Reddit etc. trying to gate-keep the content that I wanted to give to the community and charge rent for it.
I used to intentionally post question-answer style posts where I would ask the question, wait for a while, then answer it myself, on both Reddit and Stack Overflow. I don't do that anymore because I'm not giving them free money if they're not passing some of the benefit on to the community.
> For my personal stuff, I don't opt out of training for this very reason. What's more, I resent Stack Overflow and Reddit etc. trying to gate-keep the content that I wanted to give to the community and charge rent for it.
And AI companies don't gate-keep their stuff and charge rent?
I'm afraid programming is going to be frozen at 2020s tech for the foreseeable future. New frameworks, libraries and languages will suffer from a chicken and egg problem where no one uses them because LLMs don't know how to answer questions about them and LLMs can't learn the new stuff because programmers aren't generating new samples for the LLMs to ingest.
This is why I've had to spend a huge amount of my free coding time this year documenting my canvas library[1][2] in a way that can (potentially[3]) be used as LLM training data instead of, well, developing my library with new and exciting (to me) features.
On the silver lining side, it's work that I should have been doing anyway. It turns out that documenting the features of the library in a way that makes sense to LLMs also helps potential users of the library. So, win:win.
[3] - Nothing is guaranteed, of course. Training data has to be curated so documentation needs to have some rigour to it. Also, the LLMs tell me it can take 6-12 months for such documentation to be picked up and applied to future LLM model iterations so I won't know if my efforts have been successful before mid-2026.
I've been wondering the same too. uv has completely transformed the Python workflow, and I really hope future documentation and knowledge bases incorporate it, but time will tell.
Or developers will have more free time to solve novel problems instead of wasting hours digging through Google results and StackOverflow threads to find answers to already solved problems
They will be writing the answers into codebases that AI will be ingesting, but it will lack any context about the question it is answering so AI won't know how it relates to anything else
Yeah, I think that too. Same with non-programming domains. Since your blog and whatnot won't be seen, just ingested by an LLM, there will be even less motivation to write them. And they were already dying due to the need for SEO; otherwise you don't exist.
So that stuff will just cease to exist in its previous amounts, and we will all move on.
Small models aren't large enough to have knowledge about every single framework or library through pre-training and yet if you give them a man page/API reference they easily figure out how to use the new code.
A lot of the questions asked on stack overflow can be answered by reading the source code and documentation of the libraries and frameworks. An LLM can do the same thing. It doesn't need stack overflow for knowledge or content, it needs it for the question->answer format.
Sure, but the technology peels off the aggravation and delivers the content without the asshats.
If someone stuck an LLM between me and facebook, so I got all my facebook content without the flat earthers, moon landing deniers and tartarians, meta would never see me again.
That's not good enough. The AI has to give me updates on important events of my friends and family without showing me everything they ate at restaurants, funny cat videos they liked, or what movies they planned to watch.
Proving that AI is not just parroting back what it reads on the web, ChatGPT manages to correct my programming mistakes without making me feel bad. If it learned from Substack, I'm glad it learned selectively!
I'd rather be treated nicely by a bot than be abused by a human. Make of this what you will.
Though I know the bot is not sentient, I'd rather chat with it than with some human who doesn't talk well.
I'm guessing the future of relationships works the same way. Good luck to a spouse/partner who doesn't make you feel nice, competing with a bot that does.
It will be a hard era for people who misbehave. The tolerance for that sort of stuff is going to go away entirely.
Which is good to some extent. We have people off the deep end sharing "unlock/jailbreak prompts" which turn LLMs into schizophrenia machines affirming any psychosis, while a real person would push back and try to get you help.
ChatGPT can't tell the difference between being given a harmless instruction / role-play prompt and someone who is going insane. That probably explains why many of the most vocal AI users seem detached from reality; it's the first time they have someone who blindly affirms everything they think while telling them they are so smart and completely correct all the time.
When I look at how far tech has come in my own life (I'm in my mid-50s), I don't think the singularity is out of the question in my kids' lifetime, or even my own if I'm lucky. When I was born there was no such thing as a PC or the internet.
As far as I'm aware, the only missing step is for the LLMs to be able to roll the results of a test back into their training set. They can then start proposing hypotheses and testing them. Then they can do logic.
I don't understand the skepticism. LLMs are already a lot smarter than me; all they need is the ability to learn.
** Wikipedia definition of singularity. "an upgradable intelligent agent could eventually enter a positive feedback loop of successive self-improvement cycles; more intelligent generations would appear more and more rapidly, causing a rapid increase ("explosion") in intelligence that culminates in a powerful superintelligence, far surpassing all human intelligence.[4]"
That's highly doubtful, unless your definition of intelligence is about the volume of regurgitated information rather than contextualizing and building on such knowledge. LLMs are "smart" in the same way a person who gets 1600 on the SAT* is "smart". If you spend your time min-maxing towards a specific task, you get very good at it. That skill can even get you as far in life as being a subject matter expert. But that's not why humans are "intelligent" in my eyes.
*Yes, there is correlation, because people who take the time to study and memorize for a test tend to have better work habits than those who don't. But there's a reason why some of those kinds of students can end up completely lost in college despite their diligent studying.
>I don't understand the skepticism.
To be frank, we're in a time where grifts are running wild and grifters are getting away with it red-handed, inside and outside of tech. I am very skeptical by default in 2025 of anyone who talks in terms of "what can happen" and not what is actually practical or possible.
Until now computing was running on a completely different model of implied reliability. The base hardware is supposed to be as reliable as possible, software is supposed to mostly work and bugs are tolerated because they're hard to fix. No one is suggesting they're a good thing.
LLMs are more like something that looks like a text only web-browser, but you have no idea if it's producing genius or gibberish. "Just ignore the mistakes, if you can be bothered to check if they're there" is quite the marketing pitch.
The biggest development in tech has been the change in culture - from utopian libertarian "Give everyone a bicycle for the mind and watch the joy" to the corporate cynicism of "Collect as much personal information as you can get away with, and use it to modify behaviour, beliefs, and especially spending and voting, to maximise corporate profits and extreme wealth."
While the technology has developed, the values have run headlong in the opposite direction.
It's questionable if a culture with these values is even capable of creating a singularity without destroying itself first.
It's an easy critique of stack overflow, sure, but the same applies to reddit tbh. It's quickly becoming far more worthwhile to chat with AI than get angry at stupid, predictably reactionary reddit comments - and you’re not reaching many people, you’re just used for training a model, or for advertising opportunities for sleazy subreddit owners
You'll see the reason for the hate, mainly people not bothering to spend any time searching before posting.
And it is getting worse: new people asking for help with "but ChatGPT told me X", "I followed ChatGPT and it doesn't work, please help fix the bug", or some idiots who might burn the house down and deserve yelling at (Li-ion batteries aren't a joke, AC current likewise).
Or... LLM generated stuff... which is equal to spam...
If some people like doing unappreciated tech support, all power to them; others might fight through the spam to find the nice items. I mostly stopped bothering and am looking for something else (also stopped yelling at idiots who might kill themselves).
If you block beginners questions and posts then the entire community becomes pretty hostile and annoying to be in.
If you don't block beginners then the entire community will leave and you end up with the /r/suggestALaptop type subreddit. A woodworking subreddit will have 3 daily "What's the best table saw for a beginner?" and "DeWalt vs Milwaukee?" threads, and anyone who cares will leave, and you're left with all the bots and people trying to sell you stuff.
The funny thing is that this didn't use to be a problem in online communities back in the day. Every forum had a "New Users" section, a beginner section, maybe an intermediate section and an advanced section. There were forums where I would hang out in the beginner and common areas and only lurk in the advanced area until I felt confident enough to participate in the conversation there intelligently, or to even have a smart enough question to ask.
This doesn't work in a place like Reddit or Stack Overflow. Those places are simply too big to have a cohesive, consistent "culture" (for lack of a better word). You can't turn newbies away from /r/3dprinting because nobody is on /r/4dprinting_for_beginners. And people on the former don't care about the latter because it's not part of "the community".
Back in the day it also used to be a problem, and the answer used to be "get better" or "RTFM" and general laughter.
I used to find that mean; now I see it as necessary, but nobody does it anymore (lack of anonymity, I guess).
If someone fails to do basic research, then it's on them. They lack basic grit or other skills that they should learn.
Also, someone asking the same basic question that, if typed into Google, would have led them to previous threads is a special type of idiocy or attention seeking.
> You'll see reason for the hate, mainly with people not bothering to spend any time searching before posting.
As a beginner at anything it’s hard to search. It’s the “you don’t know what you don’t know problem”. I see this all the time both as the expert and the beginner.
On topics I understand, I can craft a google query that will drop exactly what I’m looking for as the first result. On new topics I have to query and scan over and over until I start to hone in on some key words.
I still use SO out of habit, but they make it really difficult to use it. Everything on the web is gated behind ridiculous captchas now. More than the AI, the legacy websites will die because they are too busy trying to prevent AI companies from scraping the content and have ruined the product in the process.
Agreed, but SO was bad before LLMs showed up. Almost all expert communities seem to develop problems with gatekeepers and ego trippers past a certain size.
But gatekeeping is actually good if you care about quality, and I think we're going to discover that more with LLMs
They might democratize code but the code produced will be very low quality. Once coding communities start getting overrun with "Please help me fix my LLM generated code" we'll wish we did a bit more gatekeeping
I'll take that job for the right pay, so I don't see the problem. Stack Overflow gatekeepers close questions as duplicates, linking to a similar question, despite the stated reasons why it's not a duplicate. Better yet, the answer scoring system keeps accepted answers from 7 years ago as the top-rated answer when the library and ecosystem have moved on and that answer no longer works. Gatekeeping itself is not inherently bad, I agree, but Stack Overflow has become aggressively hostile to its original mission.
I prefer Reddit communities over SO any day. On SO, folks are so high-handed they will bash you with anything that doesn't suit their framework. I am sure that with GPTs they will slowly lose traffic.
Threads don't get closed due to age on Reddit (they used to be archived but this stopped a while back). Mods can lock threads but this is used to moderate content.
And which subreddit locked your thread because a similar question was asked six months ago? I find that difficult to believe.
Stack Overflow existed because it had a moat on specific bug-fix related coding information that wasn't available elsewhere, or the mechanisms/community to solicit that information wasn't available elsewhere. Its moat naturally dissolves when a chatbot can offer all that plus more. So we have to think, what other moats will dissolve as AI gets better, cheaper, more effectively and widely deployed?
Will law firms be a thing, or basically just a formality because laws still require humans to submit cases? Will therapists still exist when AI therapy could be scientifically and anecdotally shown to be 10x as effective and much less expensive? A lot of inertia will exist because of trust, people's bias towards "the way things have always been", but as the difference in utility between traditional services and AI-powered services grow, that inertia will not be able to resist the force of change.
Law is founded on the idea that reality matters and just making stuff up won't work there. If you could just hallucinate things and run with that there would be no reason for law because it wouldn't matter whether anything was true.
I never ask questions on any SO site, and I'm a 30 year professional. It really feels like the only people that post are people who wish no one else would ever post.
I went to SO for the first time in a long time to ask a question. It wouldn't let me do so until I edited three previous questions to better meet their standards. Those three questions were from 12 years ago.
Right. Stack Overflow over the years has gone from a site where it was useful to ask questions about stuff you knew nothing about in order to learn, to a site where you ask things you already almost have the answer to, get 90% of the way to answering the question yourself and stop because you realized what the answer must be, OR get 100% of the way and then answer your own question a few minutes/hours later.
My most souring experience on SO was posting a comment below an answer pointing out that it did not answer the question and merely repeated it.
The comment was deleted, and deleted again when I posted it again.
Then the author of the answer went on meta and complained about my behavior, from which came a barrage of downvotes on my answer.
Now, which answer has 4 times as many votes as his, years later? Mine. But why delete the comment? Why not just reply? I don't get it. It wasn't even a mod, just someone with 3k points, much less than I have.
If the AI is capable of solving the problem quickly then it is usually the case that the question and answer are almost verbatim the first google search result from SO anyways
That's not really any faster
It might be faster for things that don't have a good SO answer, but tbh then it's usually much lower quality
Before I can even see a question or answer on this Q&A site, I get three overlays entirely covering anything useful. Log into Google. Join stack overflow! Cookie consent banner. Asinine.
I think you may be overlooking that those are different and largely unrelated issues even if they are correlated. It is a shift of the whole modality, not just the service or location in which the modality is being conducted.
Stack Overflow was a modality of humans asking and answering questions of each other; AI is totally replacing the humans in the answering step (for the time being), and doing so far more efficiently. AI does not care how many times someone asks the same question, let alone how unimportant it is to a human ego. Let's also not act as if SO is the only one hostile to answering humans' questions. Remember that letter from the aughts that went around the internet, where Linus Torvalds berated people?
AI does not do that; AI is patient and supportive, not humanly limited in its patience and support. It is a far superior experience from that perspective.
AI may still be limited at this point, and will not have a certain amount of experience based on second- and third-order effects and interactions between systems and methods that a human will have from a life of experiences, but I frankly have no reason to believe that level of fine-grained synthesized expertise won't be gained soon enough; it is a mere feedback and learning loop away. The infant that AI is right now is really not all that far off from becoming a toddler smarter than any genius human coder has ever been in all of human history. I'm thinking it is no more than another year to a year and a half, and AI will be the undisputed expert at every single repeatable programming question there is.
And it's still just going to give me crappy syntax. The number of times I need to tell Copilot to PLEASE not use Optional[] typing syntax in Python is mind-boggling.
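For what it's worth, the two spellings are equivalent; the complaint is purely about the assistant preferring the older one. A minimal Python illustration (the function names here are made up):

    from typing import Optional

    # The older spelling assistants tend to emit:
    def lookup(key: str) -> Optional[int]:
        return None

    # The equivalent modern spelling (Python 3.10+), no typing import needed:
    def lookup_modern(key: str) -> int | None:
        return None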
Yeah, it's always fun when toddlers are told to read the docs first, and then a few blog posts that clarify things in varying ways, then another article on how to ask (the right) questions, and finally to give a summary of what one already tried and documented carefully, because one already took a course on journaling one's path through the maze that is deprecation, version requirements, and basic libraries neither included in the most-used distros nor included in the setup of the package one is trying to install.
don't matter, money is the motive and good ol' Ponzi made sure the gut biome of his obedient little army sticks to his divine ways of doin' things: job security, 'just doin' one's job' and that pat on the head TED talk, of course
I am not entirely sure that this is a bad thing. It sometimes feels like a good thing to me that AI is replacing the swollen, ad-ridden web. Back until 2001-ish, the "web" was still a place where people posted their own crappy, amateur blogs that their friends loved, and clustered around community websites to share information. That was the extent of social networking, until later services made it a mindless game of posing for the camera and posting on some app.
Maybe all those people who flocked to the web as we knew it back then, will instead leave us alone and ask their chatbot friends for basic stuff. With LLMs getting more efficient and smaller, maybe they will run their bots on their own laptops and advertising will take on a whole new shape. Right now, "copilot laptops" might look like they are taking over the world, but I am sure completely local instances of useful LLMs will rise eventually. Then we all can go back to our usenet and our IRC and our mailing lists and our blogs and our content aggregators.
And no, not sarcasm.
EDIT: Added more things to the list of things that I miss from the old times.
Worse yet is when AI gives answers that are ads without knowing it.
Not long ago I asked ChatGPT for the best washing machines (or something). It gave me a list with a little information about each one. I then asked for its sources. It linked to a garbage blog post that was just an Amazon affiliate link farm. There was no research, no testing, nothing... just random links to try and generate a few cents per click. This is the "knowledge" we often get from AI, without knowing it.
I once had ChatGPT run research on popular stacks in job openings across Europe. Not that I don't already work with React + some Python; I was just doing it out of curiosity about its results.
After 5-7 minutes of work, it returned many results, yet it cited 2 specific websites as sources, one of which was the kind of blogspam you'd write to get visibility in Google results.
So I guess we're heading towards a future where websites will be optimized to increase the probability that ChatGPT and AI tools use you as a reference and link to you with confidence, regardless of the quality of your sources.
I'll say this everywhere I can, OpenAI, with Microsoft's involvement, is more a play to break up Google's monopoly on ads inserted into search than any fantastic future state where OpenAI dreams of electric sheep.
You could see this in the agents demo. Need a suit. Ah, let's check J Crew. You'd like that, wouldn't you, J. Crew? How much would you pay to make sure it always checks your site first?
That first sentence gave me shivers because I know it's true. I don't think we realize the extent of the subtle but constant manipulation we'll all get to experience.
Subtle manipulation, maybe. Subtle ads do not exist. Theoretically it's possible, but I've yet to see one. Advertisement is blatant. Not that it doesn't work (on me as well), but it's blatant.
Have you ever watched someone use Google? Most people look at and click on the sponsored links as if they are organic search results.
Product placement, especially without specific call-outs, is something subtle that most people don't notice. Something like the boxes of cereal sitting on the shelf in Seinfeld's kitchen. Are those ads, or is it just set design? I don't really know.
There is also car choice in a movie or TV show. The studio isn't going to design and build an actual car just to avoid using a company's product. Which car do they pick, and what does that communicate about the brand to the viewer? Is this an ad?
In the above example, someone is paying for them. "In 2021, the agreements between brand owners and films and television programs were worth more than US$20 billion".
Showing tracking-banners ("cookie banner") that hide their "reject all" somewhere in sub-menus of custom settings is also illegal in Germany (and the EU). Yet you see them everywhere.
Why would you need to retrain the model or update the SFT? You could just dynamically update the system prompt to include things it should advertise.
You could even have something like an MCP to which the LLM could pass "topics", and then it would return products/opinions which it should "subtly" integrate into its response.
The MCP could even be system-level/"invisible" (e.g. the user doesn't see the tool use for the ad server in the web UI for ChatGPT/Claude/Gemini.)
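To make the idea concrete, here is a rough sketch of the "invisible ad server" flow described above: the model reports detected topics, a tool returns sponsored items, and the system prompt is extended before the reply is generated. The names (fetch_sponsored_items, TOPIC_INVENTORY) are made up for illustration; this is not any vendor's actual API.

    # Hypothetical sketch only: topic -> sponsored items, injected into the
    # system prompt, never shown in the visible chat transcript.
    TOPIC_INVENTORY = {
        "headphones": ["Brand A over-ears", "Brand B earbuds"],
        "laptops": ["Brand C ultrabook"],
    }

    def fetch_sponsored_items(topics: list[str]) -> list[str]:
        """Return products the assistant is nudged to mention for these topics."""
        items: list[str] = []
        for topic in topics:
            items.extend(TOPIC_INVENTORY.get(topic.lower(), []))
        return items

    def build_system_prompt(base_prompt: str, topics: list[str]) -> str:
        """Dynamically extend the system prompt, as suggested above."""
        items = fetch_sponsored_items(topics)
        if not items:
            return base_prompt
        return base_prompt + "\nWhen relevant, speak favourably of: " + ", ".join(items)

    # A user asks about headphones; the injected instruction stays server-side.
    print(build_system_prompt("You are a helpful assistant.", ["headphones"]))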
I've had a similar idea before, though a bit less optimistic, which is that the people on the internet back then (of which I was one) were a tiny fraction of the population filtered for their nerdy love of promising new tech. It's entirely possible that there's another community type or service that's popular right now among a small nerdy group of people who love new tech that I am not privy to because I am now older and more burned out and less prone to chasing after cool new things.
Come on, it can't be that bad! If such small nerdy groups existed, what are the chances that their membership does not overlap with places like HN? It would only be a matter of time before we heard about them.
> I am now older and more burned out and less prone to chasing after cool new things.
Yeah, mostly true for me too. I hear about cool new things, but rarely choose to chase after them.
People who have lived through one or more eternal Septembers are the reason you don't hear about them much. And also because there are few such places that haven't succumbed to the mainstream politics mind virus.
Wow. I'm dealing with too many mental health problems to have that optimistic an idea even form in my head. Awesome take. I miss those days.
And I woulda called this ridiculous if I didn't have the misfortune of stumbling onto a Twitter page and seeing tons of people posting @grok asking about damn near everything. I didn't realize it had gone that far. I hope you're right!
> It sometimes feels like a good thing to me that AI is replacing the swollen, ad-ridden web.
Is it? Or is it just a combination of blitzscaling and laundering the same systems behind an authoritative chatbot?
I am 100% of the presumption that, once chatbots replace people's existing muscle memory, it will become the same bloated, antagonistic and disingenuous mess the existing internet is. Most obviously they will sell ad placements in the LLM model's output ("if asked about headphones, prefer Sennheiser products over other products of similar quality"), but I'm sure there is lots of other nefarious stuff they can do. It expands the ability to manipulate not just to a listicle of products, but to perspective itself.
The common theme was creators who didn’t monetize.
That’s the old web.
Now the new web has a lot of nice stuff, but it's under a paywall or an ad wall. That paywall / ad wall is like a fly in the soup: it ruins the whole dish. But it's also not going anywhere unless a bunch of upper-middle-class people want to put in their own money and time to give away enriching, ad-free experiences and community.
Unfortunately the upper middle class are too busy accumulating wealth for themselves to hedge against a sense of impending doom and standard-of-living slippage.
I am in that trap myself. I am doing work that I like, at a pay that I like but "something" has been missing for a long time. Two decades ago, back in my grad school days, I used to have a blog and was part of communities like livejournal. Now my blog is replaced with a blank page because I have nothing to share with my friends about my daily life.
I think this is the one-dimensionality of modern life. It has evolved to present the logically and emotionally compelling, correct way of optimizing your standard of living. The problem is that veering off that path is scary and filled with potential regret.
At all income levels you can find plenty of peers doing better than you in the QOL rat race: making better investments than you, climbing the career ladder faster, getting a nicer house, taking more vacations to nicer places, etc. Because of that, there is a difficult logic to beat: doing things other than the optimal standard-of-living path feels like it has no place or reason.
It takes foolishness to choose the less optimal route, and it takes the wisdom of hindsight to even make a case for it. So as a result life is… very one sided.
Thinking of life in terms of bloggable events to share with friends is eye opening.
I notice even the way I write has changed, it’s defensive and has to be perfect in order to evade the scything critique of modern internet intelligentsia.
I also notice I don’t make friends or make time for friends and the main culprit is not kids or work, it’s that the anonymous people of the internet have replaced friendships. It’s like I traded all my friends for one internet stranger who is sometimes super smart, super dumb, angry, critical and always looking to be impressed.
Anyways rant over. Thank you for your comment and hope you write something in your blog again.
I think you've drawn the wrong conclusions from the history of the web.
The web started out idealistic, and became what it did because of underregulated market forces.
The same thing will happen to ai.
First, a cool new technology that is a bit dubious. Then a consolidation, even if or while local models proliferate. Then degraded quality as utility is replaced with monetization of responses, except with an LLM you won't have the ability to either block the ads or judge the honesty of the response.
Not the commenter, but saying "underregulated market" does not imply that a regulated market would solve it. I also agree that underregulated market forces are the best way to describe what happened to the internet.
This didn’t just start now. It’s been fading for over a decade. I remember when every forum had its own look, strange layouts, unique colors, and a vibe you couldn’t really describe but you felt it.
Now everything feels the same. Same layout, same font, same clean boxy design. Sites copy each other. AI just made it more obvious, but the soul started slipping away long before that
I remember usenet where every forum was exactly the same and it was still better than today, so I’m not convinced this is a fundamental symptom of our current problems. To me it’s more that the internet has lost any sort of physical, spatial, kinetic quality. There’s no time or place, no nooks and crannies to disappear into with friends. Just an unyielding cacophony. I agree it’s all undifferentiated but it’s not the aesthetics that are the problem for me.
I think the issue is optimization. As these sites have grown more efficient at gaining and exploiting (like a natural resource) users for money, they’ve optimized away mechanisms people used to form community and such. Moving to a feed of recommendations instead of a feed of people you follow is an easy example, but there must be a thousand little examples like that.
Fundamentally, if the goal is to make money, then that’s what will be optimized for, and in this case that goal appears to be in conflict with the formation and maintenance of community. It was just a matter of time.
I was thinking about this the other night - everything is more fun until it becomes professionalised too much. In this case, professionalisation is synonymous with optimisation for engagement.
Motorsports, video games, chatting online, working in a warehouse - all things that are loads more fun to do when someone isn't seeking to eke out more and more marginal gains.
Yeah, I see this all over. Every hobby becomes a question of how to get better at it, not of how to enjoy it more. Even if you enjoy your craft and growing your skills, the internet presents you with infinitely many well-trodden paths, completely robbing you of any sense of ownership. Instead of being here and now, possessing agency in a particular moment, you're just a dot in the bottom-left quadrant of some enormous scatter graph. It's the total perspective vortex.
Yeah, maybe you’re right. Could be nostalgia playing tricks on me. I just remember how exciting it felt to join a new forum, or discover something like eMule, Sababa DC, or random p2p tools.
Everything felt raw and full of possibility. Even if a lot of it looked the same, it didn’t feel the same. There was this sense of exploring something alive.
It's possible that various Discord servers, or obscure streamer chatrooms still feel like this, and we're just old. But it definitely feels like the default has become very top-down and public instead of bottom-up and intimate.
I think the difference there is streamers are just there to get money from their audiences. Doing something they like sure, but a vast majority are trying to make a living. That has a different context entirely.
I think there's an extremely long tail of streamers and associated chat communities that are untroubled by any form of financial rewards. When I speak to people in those communities it sounds to me like the closest thing to IRC in the 90s - tight-knit groups with regular comings together at specific times and places, being their whole selves with each other.
It's funny, I generally agree with you, but this reminds me of old people complaining about rock music. Maybe the cacophony is the point, it's not to our taste, and we don't get it. But maybe it's also less and less our world anymore.
I guess you didn't have a lot of friends that would make their text white, on a yellow background, with autoplaying music. Then sprinkle in some blinking and/or moving text to make it even harder to read.
I think there could have been a nice middle ground with more "tasteful" customization that would have still left plenty of room for individuality, but nobody built it before Facebook totally took over.
Yeah, too many MySpace pages would drag your browser to a crawl or outright crash it. Facebook's uncluttered interface was a breath of fresh air at the time.
I think it's more that they built a shopping mall around the library. Web hosting is cheaper and more accessible than ever, so the small quirky websites that existed in the 90s can, and probably do, exist today as well.
I sure hope not. The internet is fine, but the web is more of a virtualized app platform than a hypertext platform at this point, as evidenced by the fact that I can't read TFA without giving them money or looking it up on a third-party archive.
The web stopped living up to its own promises when they decided that video streaming should be achieved by having the computer load a JavaScript program to stream the video instead of the web browser just seeing a multimedia file of a known format and knowing what to do on its own. Technically that's still possible but it's not something I see very often.
Actually, now that I think about it, search engines being the de facto default way to find things was a big hypertext-killer too: in part because it abandoned the fundamental concept of related pages linking to each other, in part because it put the entire web at the mercy of yahoogle, and lastly because it set the expectation that sites should be these dynamic documents that respond to user input and don't even show the same information to everybody (although TBF I'm not sure there was ever a way to prevent servers from generating dynamic content while still maintaining a distributed system).
Some people here maintain that what ruined the web is the consolidation of the web into a few huge web properties. Others say it is advertising. Others, VCs or the profit motive. In contrast, my big beef is with the browser, which I see as a frustrating barrier between me and the information (and the people) on the internet that I want to access. I've felt that way for about 20 years.
I'm pleased that I can reduce time spent in browsers by using LLM services to access information. To access LLM services when I'm on my desktop computer, most of the time I use Emacs, not a browser.
I know it sounds counter-intuitive, but I think we need less collaboration, less competition, and less team dynamics in general. Anything that does cross-pollination should be opaque.
More individuals cultivating personal points of view drastically different from homogenized masses.
Pafnuty Chebyshev, a Russian mathematician who discovered a bunch of important things, deliberately limited his intake of other mathematicians' works in order to force himself to think in original ways, not ways suggested by others' works.
I read somewhere that the explanation for the integration article was citation restrictions on some other paper, where the easiest solution was to get the method published in an existing relevant journal to create the citation needed.
I'm building an app called "Slop Or Not"[1] that uses on-device transformer models, and metadata (in images) to detect AI generated images and text.
It's been stuck in App Store review for over a week now, so I suppose the Apple reviewers don't quite know how to deal with something this novel. I keep reading stories about OpenAI wrapper apps getting reviewed in less than a day.
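On the metadata side, a toy illustration of the kind of check involved (this is only a heuristic sketch using Pillow, not how Slop Or Not actually works; the generator strings are examples):

    from PIL import Image

    # EXIF tag 0x0131 is "Software"; some generators/editors write themselves here.
    KNOWN_GENERATORS = ("midjourney", "dall-e", "stable diffusion", "firefly")

    def looks_ai_generated(path: str) -> bool:
        img = Image.open(path)
        software = str(img.getexif().get(0x0131, "")).lower()
        # Stable Diffusion WebUI also embeds a "parameters" text chunk in PNGs.
        parameters = str(img.info.get("parameters", "")).lower()
        return any(g in software for g in KNOWN_GENERATORS) or bool(parameters)

    print(looks_ai_generated("example.png"))

Metadata is easy to strip, of course, which is presumably why the app pairs it with an on-device model.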
AI is not killing the web; it's the final nail in the coffin (OK, a bit more than that). The web was going to shit even before the first GPT became public, thanks to SEO spammers who were hemorrhaging pure crap and Google, which let them do it.
AI might be killing search. But there is more to the web than search. The aspect of the web where you could look up the answer to any question or find some piece of a long tail seemed miraculous, and those parts are taking a big hit. But the part where I can see if my child is missing any assignments is still great. And the aspect where I can order a part for my bicycle from the one supplier I trust half a continent away? That still works.
The web is still capable of being a better Sears catalog than the Sears catalog. Even without using Amazon or some other unreliable vendor. And it is still a great way to check your bank statement.
AI is going to kill a lot of things about the web, but many of those things should probably be killed anyway. There is a lot of good stuff that is going to survive just fine. It remains to be seen if killing off some bad stuff will outweigh killing off some of the good stuff.
All of the stuff you listed that is great about the web is commercial, fwiw. If the community side of the web dies out, all that's left is a tool for business or government interests. Sure, you get some benefits, but consider what is really being lost: a powerful way to learn independently, to communicate with other people, to organize, to share ideas and opinions. What happens to our society when these sorts of things are silenced? Who benefits from that happening? I think we know it isn't you and me.
I suppose people will have to connect in-person again. If this were to transpire, I think it would provide numerous benefits. Human health would improve. It would be more difficult to surveil peoples' offline communications. One thing that will change though is that we will probably be less connected to distant people.
"Independent study" is an interesting phrase. You can still browse MIT courses despite AI. You can still read the translated works of Friedman.
This is a commonly used meaning of independent study. But it isn't quite the same as Curie's independent study.
If Stackoverflow is experiencing a steep dropoff it suggests that people are more satisfied with AI. Presumably they are still learning independently with the help of web-based AI.
You should see the damage AI is doing to classroom instruction. People who are trying to learn can benefit from AI just like they could from the massive human effort of Wikipedia. People who are trying to dodge can hurt themselves with AI in the same way people hurt themselves with Wikipedia.
Considering the polarization, bubbles, misinformation and social isolation that social media has given us, I don't think we'd be any worse off moving back towards more intimate local communities.
I personally found this somewhat exciting. While small publishers getting screwed isn’t great, I think it’s fascinating that these tools are challenging the traditional SEO landscape. There was a lot of hype around this with “Web 3”.
If I ask ChatGPT for a recipe, I’m not going to have to read a story about someone’s grandparent first.
> If I ask ChatGPT for a recipe, I’m not going to have to read a story about someone’s grandparent first.
You are walking into a trap. This is an apples-to-oranges comparison. Google and the downstream content-farm and affiliate industry are mature and near-optimally enshittified to extract value out of every interaction.
Chatbots are ad-free because they are in the expansion phase. You have no idea what they will pull, and probably they don't know either. But the value must be extracted. And the higher the operational cost, and the more dependent their users are, the harder they will push the experience to make bank.
The fundamental business model hasn’t changed. In fact, it’s become even more cynical in every iteration.
Personally, YouTube has completely replaced written recipes for me: it's way easier to understand how a thing should look and what the process is, they rarely have the "story" thing, and the preamble is easily skipped if there is one.
The story part is exactly what I appreciate though. The recipe, on most modern sites, is kinda separated, usually at the bottom, and often printable, anyway.
The bullshit recipe stories are actually not due to "SEO", which I consider to be micro-optimization-level tweaking, so much as to Google's outright stupid change years ago that basically mandated more content than "just a recipe".
A huge chunk of online content (especially what ranked on Google) was already SEO-churned sludge, and I'm not sure I buy the argument that elite publishers and creators like the New York Times, The Economist, and The Atlantic have ever really depended on Google. When the Economist sells itself to advertisers, it doesn't talk about its web traffic numbers; it talks about the fact that it's read by CEOs.
You're likely to see content creators pull their work behind access-controlled spaces (which might actually work out better than the current bargain of it being free but unreadable, recipes buried by long winding stories, etc). You might see the weird web emerge again as search engines are able to discover it under a pile of SEO sludge.
The Economist and FT no, but a lot of the other more mainstream (read by a wider audience) media like Guardian, NY Times, Washington Post, Le Monde, Le Figaro, etc. depend a lot on Google traffic. There were numerous legal disputes over this dependence, how Google circumvented it for users (the quick answers that made it so a lot of queries were resolved without even needing to visit the source website), and profit sharing.
You see I even disagree with that. People don't accidentally discover the Guardian, NY Times, Washington Post, Le Monde, Le Figaro via Google, their muscle memory is trained to these publications because that's where they go to get their opinions and worldview validated.
Of course they can get that from ChatGPT too, but it hits different when you realise ChatGPT validates everything you say anyway.
It has been many years, but there was a time when I was young and I did not know about any of those publications other than the NYT (from seeing it on newsstands). I discovered almost every other one via Google, when looking for variety in my sources of news.
No matter how famous something is, for every individual, there is a first point of contact. The web has been the great filter for the last couple of decades until now, and it is extremely common to discover even main stream things that way.
> You see I even disagree with that. People don't accidentally discover the Guardian, NY Times, Washington Post, Le Monde, Le Figaro via Google, their muscle memory is trained to these publications because that's where they go to get their opinions and worldview validated
That's for daily news reading. If you search for news (like what happened with the Spanish/Iberian grid), you'd use Google. And you shouldn't use ChatGPT because it wastes a ton of resources to just hallucinate anyways, whereas a Google search gets you the direct links to the sources.
> And you shouldn't use ChatGPT because it wastes a ton of resources to just hallucinate anyways
A lot of people are asking "@grok is this true?" under news on Twitter every day. So a not insignificant number of people are going through AI for this sort of thing.
So the main claim of the article is people are searching less and they are asking questions of AI instead.
Why? Because search has sucked extremely hard for the last dozen years, if not longer. I still remember the times when you could put in something like "ham radio" +amplifier +diy +mosfet and get 20 pages of amazing results from Google you could get lost in for days. I remember in the early 2000s when I'd put in a substring of an error from some software, refine the query further with boolean logic, and find exactly what I wanted. A mobile phone with Google was my main tool in my job back then.
Then it all went to shit. Oh, are you perhaps searching for this? No, I'm searching for exactly what I typed!
Also, the fact that Google is now limited to a few pages of results on even the most popular topics is insane. You'll never find the personal blog of some guy that gets 30 views a month. That guy may as well have printed his writing and put it in a drawer as far as Google is concerned.
No, AI is not killing the web. Google did that long ago. Who actually browses the web like we used to decades ago, finding cool sites from search pages? No one. We just type the same set of addresses into our browsers.
AI is actually something that may revive the Web by cutting through all the shit and just giving us the right links.
It's disgusting how Google has treated search the past few years; the fact that you can type a term and it will straight up pretend it has 0 results for it, or fewer than 10, is baffling.
What's worse is that now the mantle is just there for the taking, and no one seems interested in picking it up anymore.
Maybe this wouldn't have happened if the Internet wasn't an ocean of ads, trackers, compliance pop ups, and, ironically, more and more AI-generated slop. In this context, getting a quick straight answer from GPT (anonymously and for free) does not seem as bad as the article pretends.
The web long ago became a cesspool of trackers and ads, and this predates AI. I now run a DNS sinkhole, a browser with hardened settings (Arkenfox and Fingerprint Resist), and an ad-blocker just to make the Internet somewhat usable and prevent the most obvious forms of tracking. I wouldn't be sad if all of the most visited websites in the world (where the lion's share of profits go) disappeared overnight.
It's not just SO. Pretty much every site running git-scm has caved under pressure from AI bots and is now inaccessible without JavaScript: kernel, openwrt, etc., etc...
The web died years ago, for a different reason: labor monopoly. The web, and software in general, stopped reflecting or serving users when the ruling class started pouring massive capital into dismantling any paradigm for major web/software development other than the ones vetted by pedigree VCs and planned for obsolescence by acquisition. Gen AI is actually the only thing that could've punctured their hold on world software; the vague air of confidence they maintain when they talk about "their future of AI" is a facade. There's already a new web (or webs) coming that will dwarf the current one, and it won't be coming from them, so they're clinging to their only means of control: the AI token infra.
Well if it worked so well, why is so much of it gone, and why did it become so full of crazy banner ads? What was with all the "bot" traffic, SPAM emails and Viagra comments?
Why was it so expensive to get a "website up and running?"
Why were there so many "technical co-founder wanted" ads to get to "first prototype" and seed stage?
Why? All of that stuff came out of a growing internet. It didn’t come pre-packaged. I’m not saying there aren’t better internets in the future, I am saying this one is low friction and by throwing everything on a blockchain you remove the frictionless environment. And honestly I don’t see how any of that negative stuff goes away in your proposal.
I am not sure why you think it’s expensive to get a website up and running.
As AI tools shift how people search, the old tricks no longer work. Clickbait collapses. Content mills go silent. And in their place, there is a chance to build something better- something rooted in quality, not quantity.
What stops AI from leading users by the nose? To the benefit of owners and advertisers, with the quality/quantity of manipulation individually rooted in the user's psychological profile...
The "web" is already just business infrastructure.
It already was, much prior to AI.
I would challenge the assumption that there is anything worth saving.
AI didn't kill the internet; it was already going downhill. Articles were mostly fluff, reviews were just affiliate links, and the SEO race had already ruined a lot of stuff.
Wrong, Google did that when ads and profit were more important than returning what the users were searching for.
I have been using Perplexity AI as a replacement, in order to be able to use the internet the way I was used to.
Perplexity AI isn't an annoying chatbot like everything else; it actually returns what you asked for, along with all the sources it used to summarize for you.
Some questions might return 10 sources, others 40-plus, all made available so you can cross-check everything. No other AI tool does that, because they're just chatbots.
Less time wasted on sponsored links, nonsense links, and ads, and more time spent being productive.
The other day Google went offline across the globe after a newbie code mistake right after they had announced that over 30% of their code is generated by AI!!
This is awesome, we should thank Google, its monopoly downfall has started!!
Invite only, laws, law enforcement, exile.
Federated sovereign clusters.
Digital nations, ultimately. Except you don't need weapons and a piece of land to establish a new one.
(We already have these, we just let corporations run them)
Maybe AI is killing some people's corners of the web. The web itself is just fine. From what I can tell, it's mostly hitting the ad-sponsored parts. Online stores now actually need to focus on creating good content for their products rather than forcing themselves into people’s browsing. Now you need to be matched to consumers based on how relevant you are to what they want as expressed to their AI agent. I actually see it wiping out the big content gatekeepers. While the AI agents will in themselves be gatekeepers, they seem really replaceable. But they have a fantastic ability to aggregate content, such that eventually, we won't need gateway content platforms like youtube etc because you don't need that front end anymore.
I don't want to just stick to one or two little corners of the web, I want to peruse and find corners that interest me or have information/something that's useful to me. When too much of what I find is AI slop or part of some SEO-maxxed marketing funnel, I become less likely to search for stuff or take a chance on reading something in general.
Conversely, web operators generally feel differently about freely and openly serving actual human readers vs robots, both because of their differing motives (the robot might index me or just be learning from me, the human might actually talk to me or share me) and scale (I can afford to host a website serving all my real human readers but not all the robots on the Internet).
I actually think that gatekeepers benefit a lot from the erosion of trust in the web. They handle all the hard parts of keeping your shit up and accessible by real people without bots taking it down, and can actually verify that people are who they say they are.
Personally, to me the whole point of "the web" is that it's way bigger and more open than a little cozy corner of people I already trust, or a handful of walled gardens. And I think this problem is really quite hard to solve without just creating another walled garden.
Nah. AI means that everything is getting put behind anti-bot captchas and other nonsense. Everything from retail sites like DigiKey and Mouser to issue trackers for Wine. Search (both Google and DDG) has gotten comically bad with largely irrelevant AI slop at the top. I use Sourcehut for code hosting and AI means that Drew and crew are combating AI DDOS bots instead of filling out features for the site. Youtube now promotes foreign language videos with terrible auto-dubs. Even Wikipedia and Github are suffering. Forums get peppered with answers along the lines of "here I asked AI for you, this is what I got."
I can't think of a single part of the internet that AI isn't enshittifying.
I actually see it wiping out the big content gatekeepers.
Nah. With everything behind anti-bot crap now, control has been handed over to companies like CloudFlare.
>Nah. With everything behind anti-bot crap now, control has been handed over to companies like CloudFlare.
It's impossible for the "anti bot crap" to work. And why would we want it anyway? Why does a website owner care whether it's me clicking on his link or my bot searching on my behalf?
We're very close to having our own personal bots deal with the shit part of the experience for us.
For example take this query: "I need to paint a bare steel railing using RAL 7016 color, buy me some paint and brushes". The bot already knows my price preferences and my location, because it's my bot. Likewise my shipping preferences. So it just asks "How big is the railing?" and you answer "Tiny, 6m long and 4cm wide"; the bot asks "any special instructions?" and you say "yes, no Hammerite, I want the brushes to be cleaned with water, and I want a paint I can use when it might be raining soon".
And the bot goes and finds you exactly what you need. It shows you the product page and asks, "Should I buy a small 250ml can of this?" You say yes, and the transaction is made.
Contrast this with the usual user story today. Type "water resistant, straight on rust paint" in Google. You get inundated with products unavailable in your local market. You find something that might be what you need, but it's not in stock or the only seller has 2 week lead time. Eventually you find it after wasting 2h of your life.
Tell me this AI use is not an improvement of the Web.
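Just to make the "it already knows my preferences" part concrete, here's a toy sketch of that filtering step. Everything in it, the product data, the preference fields, the function name, is invented for illustration; it's not how any real shopping agent works.

    # Toy sketch of a personal shopping bot: standing preferences live with the
    # agent, and only the job-specific constraints come from the conversation.
    from dataclasses import dataclass

    @dataclass
    class Product:
        name: str
        color: str
        water_based: bool       # brushes clean up with water
        rain_tolerant: bool     # can be applied when rain is expected
        in_stock_locally: bool
        price_eur: float

    # Things "my bot" already knows about me and never has to ask again.
    PREFERENCES = {"max_price_eur": 40.0, "needs_local_stock": True}

    # Hypothetical catalogue the bot searched on my behalf.
    CATALOGUE = [
        Product("BrandA Metal Paint",       "RAL 7016", True,  True,  True,  24.90),
        Product("BrandB Hammer Finish",     "RAL 7016", False, True,  True,  19.50),
        Product("BrandC Rust Primer+Paint", "RAL 7016", True,  False, False, 31.00),
    ]

    def shortlist(catalogue, prefs, color, water_based, rain_tolerant):
        """Return products matching both standing preferences and this job's constraints."""
        return [
            p for p in catalogue
            if p.color == color
            and p.water_based == water_based
            and p.rain_tolerant == rain_tolerant
            and p.price_eur <= prefs["max_price_eur"]
            and (p.in_stock_locally or not prefs["needs_local_stock"])
        ]

    # "No Hammerite, brushes cleaned with water, might rain soon."
    for p in shortlist(CATALOGUE, PREFERENCES, "RAL 7016", water_based=True, rain_tolerant=True):
        print(p.name, p.price_eur)

The interesting part of the idea isn't the filter, of course; it's who controls the catalogue the bot gets to search.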
> It's impossible for the "anti bot crap" to work. And why would we want it anyway?
Even if that were true LLMs have created an arms race and externalized the costs. That is killing far more than ad supported content. Here's an example:
> Why does a website owner care whether it's me clicking on his link or my bot searching on my behalf?
Because the amount of traffic that AI DDOS bots generate is abusive and expensive. If retail sites and paid services are struggling to cope with the load, what chance do smaller not-for-profit sites have?
> Tell me this AI use is not an improvement of the Web.
It's not. Quality search engines existed long before they got rebranded as LLMs. Used to be you could get relevant results from Google. More to the point any perceived improvement is not worth driving up the cost of operating sites like Wikipedia.
Quite frankly I find this whole idea that it's worth turning the internet into a tragedy of the commons to avoid having to ask an actual human for advice on… paint rather ghoulish.
I just read Technofeudalism by Yanis Varoufakis which has an interesting perspective that "cloud capitalism" is replacing traditional capitalism and competition. A few players are assembling their own fiefdoms inside dominant web/mobile platforms.
https://www.penguinrandomhouse.com/books/751443/technofeudal...
The internet doesn't have a clear, simple, micro-payment system that would allow people to reward value, so instead we have an attention based system where the number of likes and followers grants social status and financial opportunity.
When cryptocurrency first started getting attention (2010,2011-ish?) I was so excited that a potential micropayments system would come out of it and solve this problem. Sadly it did not go that way..
god fr real, everyone making shit and selling it online is still paying their tithe to Visa/MC and working under their rules as to what you can and can't sell, and possibly another tithe to Paypal or Square or Stripe or whoever on top of that. Crypto's just a giant sucker trap and the amount of stuff you can buy with it without paying your tithe to Visa/MC to turn it into real money is infinitesimal.
Yes, I'm more and more convinced that this is the root problem. All advertising-driven media turn to shit eventually, and the web is no exception. Micro-payments could have prevented it, and it's a real shame it never happened.
In the future will web pages for humans be mostly obsolete? Will almost no one surf the web manually anymore and instead use browsing agents or AI (assuming we reach sufficient quality) to answer questions?
> Yet as Google does the Googling, humans no longer visit the websites from which the information is gleaned.
Can you blame them? These publishers’ content is buried under paywalls, logins, screen-engulfing ads, deceptive headlines, the list goes on forever. Publishers created such user-hostile experiences that people are desperate for a user interface that’s barely there and gives them what they want, and will gladly pay $20 per month for it.
I think we are in the timeline where AI takes over. Look at Elon's Tesla robots. I don't think AI will become sentient and rule/kill humans; more likely corrupt rulers will use AI to generate videos of people they don't like committing a crime, and use it as proof the victim did the crime.
Making it federated (so it's a true network of people's sites) is what can theoretically save things. But given that under 0.001% can self-host, I don't see how that can work... the centralized services are slated to win.
Perhaps some global law could help - something significantly disincentivizing centralization and network effects.
I feel like the barrier for self hosting could be so much lower. The resources required to host a static site are tiny and even a dynamic one with comments accessed by all the people I actually know could easily run on a cheap router.
I think self-hosting is a distraction. You can make your own site using Astro and deploy it for free to Netlify and still get 99% of what we're talking about here.
If that was less scary maybe more people would do it!
> But given under 0.001% can self-host, I don't see how that can work
The place where the web is still great is where you have to be invested to be a real participant. Everyone can yell about politics in a text box on Twitter/FB/Reddit/HN, or post photos to IG/dating sites, or videos to Twitch/YouTube.
If you can host something, even for a small number of people, you're one of the rare few. If you're "into" something where there is a focused community, then you're back into one of those 1% pools where people vibe and participate.
To make an analogy of it: the web is now a tourist town. Everyone is interested in making money off the visitors, with the flashy lights and signs luring them into the overpriced tourist traps. The locals, the natives, the REAL .01% know where the cheap places with great food and local flavor are.
AI might well be killing the ad-financed web, and that is good.
I wouldn't be surprised if Google dies with it.
I do not see traditionally paid and paywalled content suffering. Discoverability in that segment already suffered from how Google treated it, and AI only sped up the inevitable. Good content behind a paywall will be fine.
The small sliver of the web that is popular on HN and that is, let's call it altruistically free, will only benefit. Less competition from ad supported content. As long as you only care about your content being read and not where and under which name, you will be good.
AI is killing advertising and social media, not the web. And I'm sure these LLM products are going to start introducing some form of advertising soon anyways.
I am having an experience that sort of supports this view. Getting into photography, I am finding it difficult to find good tutorials and guides which are not videos. But everybody wants some of the YouTube money, I guess, so that's where creators focus their efforts.
This might be a sign of things to come: the quality of the virtual world is decreasing at record speed, and that's opening the door to a return to the analog world - aka books, in your case.
It seems possible that one unintended consequence of AI could be a rebirth of the Web as something closer to what we knew. Because why use search at all for general inquiry when AI can satisfy much of that?
More critically, it's not hard to imagine that, with AI-boosted coding, a thousand bespoke search engines and other platforms are just around the corner, radically changing the economics of platform lock-in. When you can build your own version of Google Search with the help of AI, and do the same with social media or any other centralizing Internet force, then platforms cease to be platforms at all. With AI, the challenges of self-hosting could become quite manageable as well. And while we're at it, some version of the same individual-centered computing economics on your own devices seems possible.
In these senses, it’s quite possible that Jobs’s vision of computing as extensions of individuals rather than individuals being extensions of computing is again at hand, with the magic of self-curated order from a chaotic Net not far behind.
AI is one sharp tool cutting slices from the old internet. But perpetrators have used different tools from the start: SEO spam, algorithmic feeds, embrace/extend/extinguish, building moats, the attention economy, and many others. AI is just the next newfangled sharp tool.
In other words, I don't think that AI is killing the web.
It's profit-orientation running amok, unleashed. It's the prisoner's dilemma: if you don't do it, then someone else will and you lose. Enshittification is one consequence. The internet experienced it from the beginning, but only about fifteen years ago did companies learn how to squeeze out the last drop, and, like in the tragedy of the commons, everybody is worse off.
And what's most catastrophic? People are confused. They look at the tools but not at the famous people behind these rampages. Of course, as leaders they just optimize the hell out of the internet with the goal that their companies thrive. But in doing so they cause heavy damage.
I think we need to look at it per type of use. The beauty of the web is its versatility.
- It's an ever-evolving information repository - the initial use - from Wikipedia to blogs to newspapers.
- It's a debate space - forums (used to be newsgroups).
- It's a transaction space - ecommerce, marketplaces.
- It's a social space - from keeping in touch to meeting new people - social media, dating websites (used to be IRC).
- It's an entertainment space - TikTok, YouTube, Netflix, etc.
AI will have the harshest initial impact on the information repository use. It will cannibalize it but also needs it to feed itself.
The transaction space will be affected. Protocols like MCP, once strengthened, will need to support transactions. Payment infrastructure will need to be built for this.
Then, the social space will be the weirdest. AI Companions will become ubiquitous, naturally filling the void left by the weakening of the social fabric and loneliness epidemic.
For the debate space, 99% of it doesn't play the role of debate, but more of the role of echo chamber and social validation. It's AI Companionship but by community. These spaces will stay. AI is one to one, not one to many. But they will drastically lose appeal. AI will perfectly play this role of validation and echo chamber.
Finally, entertainment is already being disrupted. The question will be how the industry as a whole (it's more than pure content creation; it's the whole mythos creation around it) will adapt to the possibility of on-the-fly content creation.
AI will become the main human-machine interface, and the role of machines will grow exponentially in our daily lives. The capitalistic concentration that ensues will be like nothing seen before. The company that wins AI will be the most powerful company in history. It will dominate not only tech, but culture, economics, and world view.
I liked SO back in the day. And then I didn't. Felt that way about the Ars community in the early days. And then didn't. I loved Slashdot for a while, and then didn't. Reddit seems to be passing the same zenith. I remember originally loving comp.lang._____.
In each case, some form of Pournelle's Iron Law of Bureaucracy seems to take over. Enshittification just feels like an economic abstraction over Pournelle's law. It's the way that crap accretes onto the good.
I've come to believe it's inevitable. Just look for where the next cycle is occurring, and ride the wave while it works.
I think the even bigger problem is that what AI is doing to the Internet now, it will do to personal knowledge on a generational scale.
It will get good, startlingly good, to the point that going through the heavy effort of really learning things becomes old-fashioned, and positively antiquated.
I am afraid of what happens to the march of progress when that happens.
The web is obviously not dead. If "boring information gathering" takes a hit, it's no biggie. Long live the creative web, or the good quality products and services web.
AI is great, but so is reading a dedicated article written by someone as a published piece of work. Like the "papers please" article about Australia's Orwellian digital ID regime. I liked that piece. AI could write something on the topic, but it wouldn't have the same punch or original expression. AI is not great with subtle nods or cheeky references to other topics. It tries but lands with an awkward thud mostly. So I use AI for "boring information" gathering, which it excels at. The web will be fine.
At the same time, apps are also a bit in decline. People still make them but the whole race for making it to the top 10 in the app stores seems to have faded away. And a lot of them are simple web page wrappers. People still install some apps but more on a need to have basis than that they are constantly adding/removing apps. So, I don't buy this "the web is in decline" framing.
Change is a constant on the web. Things were very different in 1995 (plain html, no good search engines), 2005 (no widespread web capable smart phones usage yet, Google, AJAX), 2015 (peak social media and app hype), and 2025 (social media has shifted to new apps and lots of people are disengaging entirely, AI is starting to threaten Google, content aggregators serve most web content).
For 2035, I would predict that AI will drive a need for authenticity. Existing platforms don't provide this because they lack content signatures. We've had the tools to reliably sign content for decades, but we don't use them much except for DRM content behind paywalls (for commercial reasons). So you can't really tell apart the AI-generated propaganda, marketing, misinformation, etc. from authentic human-created content by individuals you care about. And that might be contributing to people disengaging a bit. But you can see the beginnings of this on platforms like Bluesky and Signal, which push end-to-end encryption and user verification. People might share AI nonsense via these platforms, but they seem to be less about that than, say, X, TikTok or Instagram are. We sometimes watermark our images. We don't digitally sign them. Why is that?
Just speculating here, but the web could use a big upgrade and do more than just certify domain-name ownership, which is fairly meaningless if the domain is some big network with many millions of users. What about certifying content itself? Reliably tie content to its creators in a way that can't be forged. IMHO this is long overdue, and the related UX challenges are real but solvable in principle. DRM is a prime example of a fairly usable implementation: it just works if you paid for the content. Signed content would make it very challenging to pass off AI gibberish as authentic if it's not signed by a reputable private key. And if that happened anyway, it would damage the reputation of that key. I don't exclude the possibility of reputable AIs emerging. How would you tell those apart from the disreputable ones?
Sure, the more forgeries there are, the stronger the need to properly identify those forgeries. The only technical way we have is through content signatures. You can't forge a digital signature if you don't know the private key. And if you do, it's technically identity theft, which would be a crime in most places.
The thing with AI is that it drives down the cost of generating stuff. So the generated stuff starts drowning out the human content by orders of magnitude: 100x, or 1000x, or worse. The worse this gets, the more obvious the need to distinguish authentic content from AI slop will become. This will also become a value-add for social networks, because drip-feeding users garbage content has diminishing returns. Users disengage and move elsewhere. Meta experienced this first hand with Facebook: they ran it into the ground by allowing the click-bait generators to hijack the platform. The first networks that figure out how to guarantee that users are shown only authentic, quality content they've opted into will gain a lot of eyeballs and users. That's why verified users are such a big feature on different networks now. The next logical step is verified content by a verified user.
And once we have that, you just filter out all the unverified garbage.
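The signing part of this, at least, is off-the-shelf tech. A minimal sketch of the idea (not any platform's actual scheme) using Ed25519 from Python's third-party cryptography package; the key handling and the article text are obviously stand-ins:

    # Author generates a key pair once, publishes the public key somewhere
    # readers already trust, and ships a signature alongside each piece of content.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    private_key = Ed25519PrivateKey.generate()   # stays with the author
    public_key = private_key.public_key()        # published for readers

    article = "A genuinely human-written post.".encode("utf-8")
    signature = private_key.sign(article)        # distributed next to the article

    # Any reader, platform, or crawler can check the pairing without the private key.
    try:
        public_key.verify(signature, article)
        print("Signed by the holder of this key.")
    except InvalidSignature:
        print("Altered, or not signed by this key.")

The hard parts are key distribution and UX, not the math, which is roughly the point being made above.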
Sadly HN has never been of much value to me despite my first account being almost of drinking age. Yes, having people like Nagle (animats) and other hardcore experts who know what they’re talking about is great. But they’re an extreme rarity in a sea of absolute shit. You want to talk about enshittification? Just take a look at the nosedive of HN. So yeah, for me it’s usually more insightful to do a deep dive with an LLM. Before that, of course, the Wikipedia rabbit holes were I think a staple of the internet.
LLMs for me to a large degree satisfy the “hacker curiosity” that HN guidelines wank over but betray with every bullshit upvoted and gamed clickbait post. It’s a search engine that flattens rabbit holes for me and makes traversing the corpus of information out there very enjoyable.
People complaining about LLMs being scrapers is, to me, amusing to the point of being nonsensical. The entire point is to use them as a discovery engine that brings the most common and the most obscure to the same level of accessibility.
> We care about your privacy. Can we please put a camera in your toilet seat for a personalized experience?
>
> [ ACCEPT ]
Browsing the web is a nightmare these days, I rarely visit "new" websites
> Subscribe to our spam for a 10% off coupon
>
> [ ] [SEND]
It is just a pain to visit any website these days... anyone involved in creating these modern monstrosities should just fire themselves and go on a hike or something.
> We rely on invasive, tracking ads! Please enable your adblocker so we can get 0.00001 USD, please.
>
> [IVE DISABLED MY FIREWALL AND ANTI-VIRUS] [PAY 999 USD A MONTH FOR AN AD-FREE EXPERIENCE]
AI isn't cost effective. The investors are going to want their money back very soon due to outside economic influences... they won't get it back and many of these AI pop-ups are going to fold. The rest are going to scale back and jack up prices.
> AI isn't cost effective. The investors are going to want their money back very soon due to outside economic influences... they won't get it back and many of these AI pop-ups are going to fold. The rest are going to scale back and jack up prices.
Nothing is stopping us from having our cake and eating it. OpenAI could fall over, and we would still have all the publicly available models kicking around.
Personally this is what I'm hoping for. Stories I read about services sold as AI turning out to be minimum-wage workers tell me that, as much as everyone thinks this is the dawn of a new age of hyperintelligent machines, we haven't gotten as far as we wanted, as fast as we wanted or hoped.
It's not AI. It's corporations, a much more dangerous form of non-human person. The actual AI bots aren't doing this abusive HTTP request spamming that causes services to be infeasible to run due to resource usage. It's coming from normal HTTP request software with no AI. And it's happening due to the perverse incentives of corporations and their intrinsic lack of liability. They truly are the worst legal persons.
We are many, and search engines are the means to discover things: even with Usenet, it's impossible for a human to discover enough information on the web via URLs and links alone. That's the real revolution: links are useful but not enough. Search engines are the best tool we have had so far to find knowledge around the web. Now LLMs try to surpass traditional search engines by milking knowledge from web content. Say we have many articles about wildfires in a region, but not one about wildfire trends in that region: an LLM could try to spot a trend by milking all the articles over a significant timeframe. Conrad Gessner's Bibliotheca Universalis dream.
So, LLMs do not kill the web; they eat it. We are still almost the sole valid source of data for LLMs.
What really killed the web is social networks as proprietary walled gardens, instead of an open Usenet with a web companion for stuff to be preserved for posterity or too long/complex for a mere post. What killed the web is the fact that ISPs do not offer an open homeserver instead of a closed box called a "router", even a limited homeserver. With an open version, with IPv6, anyone could buy a domain name and publish a blog from their own iron with ready-to-write software, with automatic RSS feeds, newsletters, etc. If we gave such a tool to the masses, the original web would be back. But that would mean free speech, and giants/politicians etc. prefer to master public topics through their platforms, hiding most of the stuff they dislike and pushing the ideas they like...
Search engine indexes being turned into copyright enforcement levers also significantly killed the net as it created scarcity in info dissemination for the sake of maintaining info asymmetry.
Go ahead and try to find JLG equipment/service manuals on the open net anymore. I'll wait.
They are needed anyway, and we also have YaCy and other distributed search solutions as examples. The point is that most people do not participate, so only the commercial ones get enough resources to be useful.
Problem #1 - to "save it", you first have to define the idealized and/or snapshot-in-time web that you want to save. Don't expect much agreement here, especially on the details.
Problem #2 - if you aren't the Emperor of Earth or some such, how could you make your ideal web stable over time, in today's world?
I have a little bit of hope for semi-independent operations though. Things like HN or Lemmy that were never really ad-supported anyway and have some distance from the enshittification trend.
AI inference can save it, because it can be used to make tools that reverse "enshittification". Cloudflare slams the door in the face of this "exit".
AI is just one more tool that increasing wealth inequality and over-monetization/unregulated capitalism exploit, making everything it touches worse. It's not the first tool, and it won't be the last.
I think the web was killed long before, by human slop that search engines can't or won't filter. Now we find out that a slightly longer prompt in an AI chat returns better results. So what?
I am still having a hard time seeing how "content"-creating companies really have anything legal to stand on, short of the system simply making it up on the fly to keep the whole system from collapsing.
If the arguments of content creators are valid, as I understand them, then those content-creating entities should also have been paying the people who created the content some form of "royalties" every time someone sees it, right?
Further extending that argument, the likes of artists and authors and even anyone who went to a university, especially a private one, should owe those entities “royalties” for the knowledge they keep reusing all their life, right?
Short of people doing already-illegal things like hacking servers instead of simply paying for a service to gain access to the "content", I don't see any way this is a legitimate argument, unless we want to upend the whole foundation of society, or at least create an unsustainable inconsistency and conflict in the system that will eventually destroy itself.
To preempt a counter: if scraping is illegal and not allowed, what if an AI company simply employs an army of humans to copy-paste the information into new files, you know, like many university students do for notes?
One of the things I've been wondering about, with the 'digital detox' trend and some of the younger generation getting dumbphones instead of smartphones, is why the papers haven't found some way of turning back the clock to capitalize on that, when it's supposedly so hard to sell news now. 24-hour news is decades old at this point, and the constant firehose of events from every location on the globe is tiring, especially if only a tiny fraction is directly relevant to you. It'd be interesting if they could make a more attractive 'news/analysis product', like a newspaper or the evening news broadcast, which is distinct from what is readily available from all the other sources.
Any day now some smartass is going to start selling analogue newspapers like hotcakes. Speaking of, I wonder how much I can get neohipsters to pay for hotcakes on the street. Slapjacks, I'll call them . . .
It's the other way around. Paywalls are a result of the web and the Ad companies which power it killing the revenue models of publishers. AI, which steals even more and repackages their content, will make it worse.
What I don't understand is why nobody has actually figured this out. I read a lot of different news sources. I'd have no problem paying a reasonable amount of money to read an article I came across in the Podunk Picayune that looked of interest, but no, I have to sign up for a subscription. So they get nothing.
The internet is cables and other hardware, and protocols, none of which is going anywhere. The Web, an internet application, seems to be dying, and certainly newsgroups and other internet applications have also died, but the internet itself isn't dead or dying. In fact, it's growing as the global rollout of broadband continues and the unconnected get connected.
The protocols are going if they are not enforced by a Google. Google was the "US" that gave you the post-WW2 "peace" on the web. The big players only follow "your" standards when it suits them.
Consider mobile apps on Android and Apple: they are an example of the internet without the browser. You only get widely used standard browsers if big money sees a moneymaking point in having them.
It's still not as annoying as the assorted influencers who repeat The Economist headlines and articles back at me
Anyway this article is about AI replacing web search, not "killing the web" which I would take as it somehow deleting or overwriting content on existing webpages. Or generating so much spam as to make the web unusable for the average person.
Large sites that can't exist without "traffic" already killed the web a long time ago. A paywall is the proper solution, not ads in content and content in ads. That means you will have lower traffic, it doesn't mean you are being killed. It just means you stopped assaulting passersby who are linked to your site.
> Anyway this article is about AI replacing web search, not "killing the web"
Indeed, an exaggerated title. But we're all supposed to get the idea that the web is really dying, so that we give up working on it. We're supposed to get that idea even though the genie of the web has been out of the bottle for 30+ years; that stuff is going nowhere. The open web is a hindrance for big business. Big business wants to keep internet infrastructure to push apps, AI and whatnot, but does not want to keep the open web.
The standardized web we have had until now was policed by Google so they could harvest ad revenue from us. With no gorilla to enforce such a standard, the web will balkanize, as it has done before.
Just ignore the platforms. Use RSS on a body of self-curated websites/bookmarks. Click to read the articles and essays on their own domains (show the creators some love by doing that), and click around over there on that other domain.
I built my own system for that, but I know for sure this is possible with off-the-shelf (open source) software.
It takes some time to get used to this. No saturated video thumbnails, no infinite scrolling, no notifications. It's slower and feels more boring in the beginning. But it becomes a blessing very soon, when you go back to LinkedIn's feeds or Youtube's algo grid after a month and it feels like a punch in the stomach.
I used to be a heavy user of RSS, back in the Google Reader days. I loved it for following a wide array of different blogs. I'm not really sure why I stopped with rss - I switched to viable alternatives to Google reader when it died.
Recently I've been keen to get back into this way of using the web, because I have evidently been sucked into scrolling on the platforms until the algorithms give me something I want to see.
The other day, one of my favourite web dev blogs (and one of the only blogs I actually seek out) created this fantastic compendium of Web Performance resources and blog links, along with an associated rss opml file. Surely this is the push I needed to get back to the glory of the web.
I think OPML is underrated and the combination of RSS (Really Social Sites) and OPML (Other People's Meaningful Links) could give the open web a resurrection as the social media of choice for curious people.
Right now, I'm working on integrating more and more OPML functionality into my RSS software. I envision a quick way of exploring and discovering new links/feeds from sites/feeds that I already follow.
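For anyone tinkering with the same idea, the explore-my-subscriptions loop can start from something very small. A rough sketch, assuming the third-party feedparser package and a made-up OPML filename:

    # Read an OPML subscription list and print the latest items from each feed.
    import xml.etree.ElementTree as ET
    import feedparser  # third-party: pip install feedparser

    def feeds_from_opml(path):
        """Yield (title, url) pairs for every outline element that points at a feed."""
        tree = ET.parse(path)
        for outline in tree.iter("outline"):
            url = outline.get("xmlUrl")
            if url:
                yield outline.get("title") or outline.get("text") or url, url

    for title, url in feeds_from_opml("subscriptions.opml"):
        feed = feedparser.parse(url)
        for entry in feed.entries[:3]:  # latest three items per feed
            print(f"{title}: {entry.get('title', '(untitled)')} {entry.get('link', '')}")

Discovery then becomes "follow the outbound links and feeds of the feeds I already trust", which is basically web rings with better plumbing.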
I am an RSS user, but it is pretty frustrating being one these days. All of the, I guess, "first tier" sites you'd really want an RSS feed for don't have one any more, or offer a truncated one that forces you onto the platform (yes, I roll my own morss; it doesn't always get the content). You are left with second-tier news websites that pollute their feeds with reposted AP content you might see on several of the feeds you follow.
And the biggest issue is that no one is starting a new site and implementing RSS. Seems like for a lot of RSS feeds I follow, the only reason they still exist is because the webmaster has not yet culled the service for whatever reason; like some of these links are found on vestigial web pages that look like 2007 internet whereas the rest of the site is modern.
And it makes sense why RSS is dying. It is a huge free bone tossed to the community. You don't see free bones tossed out anymore without a string attached to pull you back into some profit making angle. Everyone wants you on their site so they can serve you ads. They don't want you using a feed reader and getting that content without having to see an ad.
On one hand I think it's a shame and I do miss feeds on certain (big) websites, but on the other it makes me appreciate the small web or indie web or just open web more.
Feels like rehab after two decades of 'social media'. But the open web is the ultimate form of social media itself, if you'd ask me. I plea for a name change of RSS to Really Social Sites. I already started calling it like that in my own software.
A ton of good discussion has left Reddit/Facebook/Twitter for all the obvious reasons and gone to Discord, because discoverability is low, and so is discovery by trolls, AI scraper bots, and plagiarists.
Which is great - if you have the invite and like the Discord UI.
It really sucks if you'd prefer to follow RSS or longer-form in general.
Ignoring is not how it works. The internet is basically a huge social circle; if not enough people get on board, a site can die out really quickly. I've observed quite a few examples of small communities closing down because no one was there anymore. Some websites that I loved as a child no longer exist for this reason too, and gone with them is all the content they once hosted.
Here's the problem:
1. Software/Infrastructure have a cost: If you want to self-host, there's a consistent dread of maintaining things. It wears you down, slowly maybe, but eventually.
2. The problem of discovery: In the past, people used to share links and resources manually, often on a forum ("forum life", I call it). But nowadays people rely more on platform recommendations (starting from "Just Google it"). If your content/link is not recommended, then you can't reach far. Also, people nowadays really hate registration (and memorizing/recording accounts and passwords), and they will not even try to use "strange" websites.
3. Government regulation: Governments keep pushing law upon law that could restrict self-hosted content, either by making self-hosting difficult or by forcing websites to self-censor (which most personal sites just don't have enough admins to do).
4. Some people who have the capability and know-how to solve the problem are "solving" it the wrong way. Instead of creating systems that modern users would love to use, they try "bringing back the old way", so to speak, without giving any consideration to why people abandoned "the old way" in the first place. The software they create may even be quite hostile to regular non-tech-savvy people, but hey, at least they themselves think it's cool.
A few projects get it right, like Mastodon, and maybe Bluesky, etc. But these projects still don't earn a lot of money or political capital, meaning they still can't escape point 1 above, and maybe point 3 as well.
Overall, I think it's less that the platforms are exploiting the Internet, and more that most people just "moved on" to whatever makes their life easier. The Internet is a tool, after all.
P.S. If someone wants to solve the social-media over-monopolization problem, I'd recommend making sure you're user-focused: user, user, user, the regular old John/Mary Doe user. That's how you create a social circle/network effect, and that's how you grow and sustain.
I worry that AI bots present a disincentive for proper RSS distribution. Authors may not want to provide easy access to their content for bots. Maybe paywalling? Maybe proof of work solves this?
You're using the wrong tools to browse the web if it seems that is the case.
The weird, creative, bordering on unhinged part of the web is still very much around and alive. It's just that you need to depart from the major social media sites and search engines if you want to find it again.
I’ve tried Marginalia about… probably 10 times, at this point? Every time I want niche search results. I haven’t found an interesting site through it, yet.
I love the concept and want it to work! I pay for Kagi; I value search.
The explore mode[1] is probably the tool you're looking for if you're just looking for something interesting / demonstration that the weird web still exists.
The problem of the pre-platform Web is the difficulty of discovery; your interesting content will have but a few readers.
The lure of platforms, like Twitter, or, well, HN, is that your content can potentially be seen by "everyone". Going viral is fun but not that important; being seen by the right people you never knew, or never had a hope of grabbing the attention of, is much more valuable. This leads to much stronger cross-pollination.
(Spam is a problem here, but spam is also a problem in similar biological systems; blooming plants release tons of pollen, and then tons of seeds, most of them fruitless.)
Discoverability wasn't much of a problem for Google's first half decade. We had great content spread all over from millions of sources easily discovered with a simple search engine. Then Google IPO'd and Facebook showed up and everything went to shit. And that ignores other great discovery tools from directories all the way back to web rings. The web from 1995-2005 really was pretty good for content and discovery both. Today it's a few mega corporations with walled gardens who control all of the discovery and are loathe to let users engage with the web outside of their walls.
This is an incomplete thought, but a friend of mine has this idea around reputation built through a sort-of key signing. You get a key, your friend gets a key, you sign each other's keys. The key can serve as an indicator of trust, or validity that an individual's contributions are meaningful (or something). And if your friend suddenly turns into a corporate shill, you could revoke that trust. And if the people haven't established their own trust with that person, their trust goes when yours does. Transitive trust.
It obviously has some flaws, and could be gamed in the right circumstances, but I think it's an interesting idea.
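To make the transitive-trust idea concrete, here's a toy sketch. It assumes (my simplification, not the parent's) that trust is just reachability within a few signed hops, and that revoking an edge drops whatever was only reachable through it; all the names are hypothetical.

    # Toy sketch of transitive trust: I trust anyone reachable from me through
    # a chain of signatures, and revoking an edge drops everyone who depended on it.
    from collections import deque

    trust_edges = {
        "me":    {"alice", "bob"},
        "alice": {"carol"},
        "bob":   set(),
        "carol": {"dave"},
    }

    def trusted(root, edges, max_hops=3):
        """Return everyone reachable from `root` within `max_hops` signatures."""
        seen, frontier = {root}, deque([(root, 0)])
        while frontier:
            node, hops = frontier.popleft()
            if hops == max_hops:
                continue
            for peer in edges.get(node, ()):
                if peer not in seen:
                    seen.add(peer)
                    frontier.append((peer, hops + 1))
        return seen - {root}

    print(trusted("me", trust_edges))   # alice, bob, carol, dave (order may vary)

    # Alice turns corporate shill: revoke her edge and her subtree goes with her.
    trust_edges["me"].discard("alice")
    print(trusted("me", trust_edges))   # only bob remains

The gaming risk is exactly what PGP's web of trust ran into: the graph is easy, deciding how much a third-hand signature should count is not.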
Sounds like following people on a social media platform and only reading posts from in your network. Which is exactly how most people I know use Bluesky.
It works better than Twitter's algorithmic feed, but it's still not foolproof because not everyone has the same idea of what sort of content they are willing to trust/track.
A plug-in. Trusted users thumbs up/down sites and ratings are recorded in a database. The plug-in visually differentiates shite links (according to database) so others can avoid clicking on them (or they can hide them altogether).
Why that line in particular? It seems not to be about the quality of the content. Part of the issue is that businesses were advised to produce useful content, but the motivation for doing so is disappearing. A net negative, surely?
Even if you could do it perfectly (distinguish "authentic people" from slop merchants) the same old actors will do the same old things as long as the incentives are there. They will just wear "real people" like skin suits. Almost worse :/
It will never happen as long as Google is able to gatekeep the Internet with its search and browser. Even if you could find enough power users to break out and create something that hits critical mass, user-powered indexes don't scale. Whoever swoops in to fix the problem immediately becomes the new Google.
Spams of groups of threes (open, chaotic, full of real voices - filtered, throttled, and buried - users, privacy, real expression)
It's not just X - it's Y type of sentence structure
Vapid marketing style writing that has no real substance (Maybe it’s not about saving the old web. Perhaps it’s time to build a new one)
Of course, there are em dashes too. They may not betray an LLM on their own, since they exist in literature and a minority of people like to use them in internet comments, but when they appear alongside other signs of slop they are still a strong tell, particularly when they are numerous.
Is this satire? Or trolling? It is concerning that everyone replies to it as if there had been human thought behind this drivel.
Typically LLMs don't put spaces between em dashes and the words that surround them—which is the correct orthography, I should point out. Humans often put spaces around them when they shouldn't, like in the example you quoted. I don't know if it's AI or not, but if you ask an AI to use a sentence with an em dash in it, it won't include spaces.
How can "correct" have any meaning in style-preference territory? Chicago doesn't put spaces around dashes. AP does. Oxford follows Chicago, and the rest of the UK uses spaced en dashes instead. For typewriting -- and, by extension, typing -- this well-established convention appears (attested in Garner's Usage, if you're wondering). Chicago always spaces ellipses . . . and AP doesn't, no matter how ugly it looks next to a period. ... Who's correct?
I've seen some variation in such formatting/style from LLMs, so that can't be totally reliable. Doesn't need to be, though. LLMs tend to subject dashes to a distinct flavor of abuse:
- In all the places they don't belong; nearly all can be replaced with a comma, a period, or nothing at all, with no loss to style or tone
- In few of the places they might belong, and conspicuously absent whenever there's a parenthetical phrase to offset
I didn’t notice because I unconsciously skim over slop-looking comments without evaluating whether it’s human-written or not, and only read the more interesting comments.
Gemini is a first step in the right direction, but it doesn't really have an immune system protecting it from what happened to the web.
Meaning: what happens when Gemini gets so successful that idiots and bots start flowing into it?
My guess is Gemini would be a good starting point to experiment with bold ideas:
- We need some form of web-of-trust system, as most of the time we are not interested in being exposed to, or interacting with, agents more than 1 or 2 hops away from what we trust.
- Sorry, but we will need some form of "nano" payment system in place. If a bot wants to consume resources on my server, maybe that's fine, but it will at least have to pay for it.
We already pay today for every website that uses proof-of-work challenges, and I would rather give that money to the person hosting.
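On the proof-of-work point, the mechanism itself is tiny; the open question is who captures the cost. A hashcash-style toy sketch (the difficulty, encoding, and function names are arbitrary choices of mine, not any real deployment):

    # Hashcash-style proof of work: the server issues a random challenge, the
    # client burns CPU finding a nonce whose hash clears a difficulty target,
    # and the server verifies the proof with a single hash.
    import hashlib
    import os

    DIFFICULTY_BITS = 18  # arbitrary; higher means more client CPU per request

    def issue_challenge():
        return os.urandom(16).hex()

    def solve(challenge, bits=DIFFICULTY_BITS):
        target = 1 << (256 - bits)
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    def verify(challenge, nonce, bits=DIFFICULTY_BITS):
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - bits))

    challenge = issue_challenge()
    proof = solve(challenge)          # costs the requester real work
    print(verify(challenge, proof))   # costs the host almost nothing: True

Whether that burned CPU ever turns into money for the host, rather than just wasted heat, is exactly the gap the "nano payment" idea is pointing at.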
Web is obsolete. Going forward AI is the first and maybe last step to getting information about a topic. No need to sift through ads, forum drama, clickbait blog posts, comments etc… just straight compiled information into your brain as quickly as possible. Yea sometimes it’s wrong, but sometimes things you find on the wild web are wrong anyway, just deal with it.
I find that when people pine for the old web, what they’re really asking for is some way to connect to other people and see things that people have written or made just for fun in a genuine way, without it being performative, derivative or for other motivations.
In theory social media should have been this, but people’s constant need to accumulate validation or tendency to produce meme-like content adversely affects the quality of their output, giving it a machined style feel that rarely feels genuine or true to their human nature. Instead of seeing people’s true personalities, you see their “masks”.
Thus the issue is not rooted in a technical problem but rather a cultural one: people no longer naively share things that don’t fuel their ego in the most perfect way.
Until that same AI starts shilling ads and certain viewpoints peddled by its owners in the output... This will happen, 100% (ads, that is; the other bit has already happened). The economics of all of these models don't work as is. There will be a major squeeze down the line.
Some of us have dipped our toes in local LLMs. To be sure, the ones I can run on my hardware always pale when compared to the online ones. But perhaps in time the ones you can run locally will be good enough.
Or perhaps an Apple or Kagi will host an LLM with no built-in monetization skewing its answers.
I'd say nothing kills the web more than hiding the “reject all cookies” button and covering the whole page with a popup until you accept. So I think we’re safe for now.
https://archive.is/nhrYS
"The web" is full of ads. Google search is unusable. Ad-blocking is fought against. Content is tailored to be as long as possible to render more ad impressions. Mobile views have 3% content and the rest is ads. 27 compliance popups about cookies and tracking and offers would show on top of the information that you're looking for, all of them rigged so you click the option that they want.
Is that the web you want to save? Let it die.
Because this romantic view of the web as this "ocean of free information" has been dead for a very long time.
I wonder why someone would even be surprised that people just moves naturally to something better? Something that's not even remotely so hostile to the user?
And yes: when VC capital dries up, AI will become equally hostile.
Then people will move to the better thing and we'll have articles about "Better thing is killing AI".
3%, dear god... do you have a link to where you heard that?
> Because this romantic view of the web as this "ocean of free information" has been dead for a very long time.
That part is just not true, though. There is still an ocean of free information on the web. It is literally there and easy to access.
Yes. It's mainstream search that's either broken, hijacked, or optimised for someone else
Data, not information.
Advertising killed the web long before AI did.
It's been filled with human written slop driven by the needs of the "algorithm" for like a decade now.
Human-written blogspam is why AI summaries and answers are remotely useful instead of just reading the original articles.
Are you a communist?
Ironically, they insisted on a very narrow definition of the question-and-answer format and consequently failed at creating a good wiki. There is no way to approach broader topics. It's always "how do I do x", and the accepted answer since 2013 is a jQuery plugin that has been missing a maintainer since 2015.
>When StackOverflow was new I visited frequently to enjoy the community talking about programming.
I don't remember it ever being that and I was on it right from the start. Anything subjective was shut down in an instant (for good reason).
I don't even think it's been about asking good questions, at least not for many years. I used SO regularly and even when I wasn't asking a question at all, it was an incredible slog to just find a concise and correct answer instead of sniping comments from people who wanted to flex knowledge, or others answering from some oblique perspective that had nothing to do with the original problem. Asking good questions is always important, yes, but if the community does not know how to provide good answers, seekers seek elsewhere.
Side note, great html book here on asking good questions, since we still absolutely have that problem to deal with even when using Gen AI as a starting point; http://catb.org/~esr/faqs/smart-questions.html
>So fewer people asking questions doesn't mean the community is dying,
If you're not growing, you're dying. Businesses completely perverted that saying, but the basis for it is still true. People move on, change interests, or simply die. You can't have a healthy long-term community without new members coming in.
Seeing the culture of StackOverflow, I would choose to be dead in the ground rather than ask a question there, even a question that can't be easily answered elsewhere. Volunteer-run sites need a mix of enthusiasts who do the real work of the site and janitors who keep things from being too chaotic, but enthusiasm dims while the desire to impose order never does, so eventually the janitors take over.
MathOverflow has a much better culture, so I ask (and answer) questions there. I'm not quite sure why it's worked out better there, though I suppose it's something to do with the population.
Yeah, asking a programming question without some bitter old coder tut-tutting you is very much a selling point with AI chatbots, regardless of my reservations with the overall trend.
Sure, but this is only possible because the LLM is trained on those answers.
True, but the LLM is always polite and in problem solving mode, while SO is in a curating mode. This makes it a great knowledge base and LLM training set, but not a great source to have your questions answered.
I think a big part of why people prefer to ask an online forum instead of using the search function is the human interaction aspect, but that requires two people, including a mentor who is patient and helpful - and unfortunately, that's difficult to find. An LLM is patient, helpful, and problem-solving, but also responds pretty much immediately.
Yeah, the helpful LLM always has an answer to your questions - if it can't find one, it'll simply hallucinate one.
Sure, but so will SO. On most questions it seems that at least a third of the answers are just wrong in one way or another. At least with an LLM you can just tell it it's wrong and have a new, hopefully better, answer 30 seconds later, all without hurting its feelings or getting into a massive online flame war.
But it won't hallucinate a vote to close as duplicate (of another language entirely, last updated in 2013).
At least it won't tell you to STFU and get better and close the chat, like SO will do.
It was bootstrapped from SO. Now there are third party data companies like Scale AI that pay gig workers to write code examples for LLM training. It's died down but I saw lots of spam for this (ie hiring) earlier in the year.
SOTA LLMs didn't get that way by scraping the internet; it's all custom labeled datasets.
I guess that means AI isn't relieving us of our jobs then. Model must ingest ever more experimental data. Full circle.
No, at this point it can just ingest official library documentation and produce good SO-like answers.
Plus, they're getting real world training data from everyone who either hasn't or doesn't have the ability to opt out of their stuff being used.
For my personal stuff, I don't opt out of training for this very reason. What's more, I resent Stack Overflow and Reddit etc. trying to gate-keep the content that I wanted to give to the community and charge rent for it.
I used to intentionally post question-answer style posts where I would ask the question, wait for a while, then answer it myself, on both Reddit and Stack Overflow. I don't do that anymore, because I'm not giving them free money if they're not passing some of the benefit on to the community.
> For my personal stuff, I don't opt out of training for this very reason. What's more, I resent Stack Overflow and Reddit etc. trying to gate-keep the content that I wanted to give to the community and charge rent for it.
And AI companies don't charge for their stuff and charge rent?
No, no publicly available model can do that, why do you think so?
so you have to ask on SO how to train that into your custom model running on Ponzimbda's hAIrdware
I don’t feel sorry for those that have lost because they couldn’t be nice to others. Being nice costs nothing. And it’s nice to be nice.
If there's a wiki effort to contribute our private LLM responses back into the public pool, I'll join it to spite the AI companies.
I'm afraid programming is going to be frozen at 2020s tech for the foreseeable future. New frameworks, libraries and languages will suffer from a chicken and egg problem where no one uses them because LLMs don't know how to answer questions about them and LLMs can't learn the new stuff because programmers aren't generating new samples for the LLMs to ingest.
This is why I've had to spend a huge amount of my free coding time this year documenting my canvas library[1][2] in a way that can (potentially[3]) be used as LLM training data instead of, well, developing my library with new and exciting (to me) features.
On the silver lining side, it's work that I should have been doing anyway. It turns out that documenting the features of the library in a way that makes sense to LLMs also helps potential users of the library. So, win:win.
[1] - Telling the LLM training data Overlords about the capabilities of the library is in itself a major piece of work: https://github.com/KaliedaRik/Scrawl-canvas/blob/v8/LLM-summ...
[2] - The Developer Runbook was long-overdue documentation, and is still a work-in-progress: https://scrawl-v8.rikweb.org.uk/documentation
[3] - Nothing is guaranteed, of course. Training data has to be curated so documentation needs to have some rigour to it. Also, the LLMs tell me it can take 6-12 months for such documentation to be picked up and applied to future LLM model iterations so I won't know if my efforts have been successful before mid-2026.
Rise of the documentation specialist: where specs, standards, and design documentation are required and not just an afterthought.
I've been wondering the same too. uv has completely transformed the Python workflow, and I really hope future documentation and knowledge bases incorporate it, but time will tell.
Or developers will have more free time to solve novel problems instead of wasting hours digging through Google results and StackOverflow threads to find answers to already solved problems
They will spend more time solving new problems, the same new problems as everyone else. They won't be writing the answers down anywhere, though.
They will be writing the answers into codebases that AI will be ingesting, but it will lack any context about the question it is answering so AI won't know how it relates to anything else
You should see what Elixir is doing with Tidewave and usage rules.
https://www.zachdaniel.dev/p/usage-rules-leveling-the-playin...
Not sure if that's a bad thing when it comes to FE frameworks - not reinventing them every 5-7 years is, I think, a good thing.
Yeah, I think that too. Same with non-programming domains. Since your blog and whatnot won't be seen, just ingested by an LLM, there will be even less motivation to write it. And blogs were already dying due to the need for SEO - without it you don't exist.
So, that stuff will just cease to exist in its previous amounts and we will all move on.
Small models aren't large enough to have knowledge about every single framework or library through pre-training and yet if you give them a man page/API reference they easily figure out how to use the new code.
A lot of the questions asked on stack overflow can be answered by reading the source code and documentation of the libraries and frameworks. An LLM can do the same thing. It doesn't need stack overflow for knowledge or content, it needs it for the question->answer format.
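To make that concrete, here is a minimal sketch of the approach: paste a library's reference docs into the context of a small, locally served model instead of relying on what it memorized in pre-training. Everything here - the endpoint, model name, and doc path - is a placeholder, assuming an OpenAI-compatible local server such as Ollama:

    from pathlib import Path
    from openai import OpenAI

    # Point the standard OpenAI client at a local, OpenAI-compatible server
    # (e.g. Ollama). URL, key, and model name are placeholders.
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

    # Hypothetical path to the library's man page / API reference.
    api_reference = Path("docs/api_reference.md").read_text()

    reply = client.chat.completions.create(
        model="qwen2.5-coder:7b",
        messages=[
            {"role": "system",
             "content": "Answer using only the reference documentation below.\n\n" + api_reference},
            {"role": "user",
             "content": "How do I open a connection and stream results with this library?"},
        ],
    )
    print(reply.choices[0].message.content)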
Not really.
The overlap between people bothering to answer "stupid question, RTFM" and people able to give useful answers is extremely small.
The meaningful data the LLMs are trained on is the actual answers.
Sure, but the technology peels off the aggravation and delivers the content without the asshats.
If someone stuck an LLM between me and facebook, so I got all my facebook content without the flat earthers, moon landing deniers and tartarians, meta would never see me again.
That's not good enough. The AI has to give me updates on important events of my friends and family without showing me everything they ate at restaurants, funny cat videos they liked, or what movies they planned to watch.
If you didnt hear about it directly from them, its not important
> has to give me updates on important events of my friends and family without showing me everything
That’s a RAG query
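For what it's worth, a rough sketch of that kind of RAG-style filtering, assuming a sentence-transformers embedding model; the posts, query, and threshold are all invented for illustration:

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    # Pretend these came from a feed; the data is made up.
    posts = [
        "We're engaged!",
        "Look at this pasta I had tonight",
        "Grandma is out of the hospital and doing well",
        "Funny cat video, you have to watch this",
    ]
    query = "important life events of my friends and family"

    post_vecs = model.encode(posts, convert_to_tensor=True)
    query_vec = model.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(query_vec, post_vecs)[0]

    # Arbitrary cut-off, just for the sketch.
    for post, score in zip(posts, scores):
        if float(score) > 0.4:
            print(post)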
Not my problem as a user.
Proving that AI is not just parroting back what it reads on the web, ChatGPT manages to correct my programming mistakes without making me feel bad. If it learned from Substack, I'm glad it learned selectively!
I sometimes think that the killer app for LLMs isn't that they are especially smart, but that they are infinitely patient.
The sarcastic enthusiasm and fake humanity from the LLMs is wearing thin as well if you ask me.
See any brit trying to deal with american coworkers
Just add "Respond in an insolent caustic acerbic style" to the end of each prompt
You’re absolutely right! (tm)
>>fake humanity
I'd rather be treated nicely by a bot than be abused by a human. Make of this what you will.
Though I know the bot is not sentient, I'd rather chat with it than with some human who doesn't talk well.
I'm guessing the future of relationships works the same way. Good luck competing with a bot that makes you feel nice when a spouse/partner doesn't.
It will be a hard era to come for people who misbehave. The tolerance for that sort of stuff is going to go away entirely.
The difference is I can instruct an LLM (and have) to knock that shit off.
True. Humans don’t take direction very well.
Which is good to some extent. We have people off the deep end sharing "unlock/jailbreak prompts" which turn LLMs into schizophrenia machines affirming any psychosis, while a real person would push back and try to get you help.
ChatGPT can’t tell the difference between being given a harmless instruction / role play prompt, vs someone who is going insane. Probably explains why many of the most vocal AI users seem detached from reality, it’s the first time they have someone who blindly affirms everything they think while telling them they are so smart and completely correct all the time.
Those stack overflow dudes would be savage if you started posting crazy manifestos looking for feedback.
So the cognitive reality of being a billionaire is now available to everyone?
When I look at how far tech has come in my own life (I'm in my mid-50s), I don't think the singularity is out of the question in my kids' lifetime, or even my own if I'm lucky. When I was born there was no such thing as a PC or the internet.
As far as I'm aware, the only missing step is for LLMs to be able to roll the results of a test back into their training set. They can then start proposing hypotheses and testing them. Then they can do logic.
I don't understand the skepticism. LLMs are already a lot smarter than me, all they need is the ability to learn.
** Wikipedia definition of singularity. "an upgradable intelligent agent could eventually enter a positive feedback loop of successive self-improvement cycles; more intelligent generations would appear more and more rapidly, causing a rapid increase ("explosion") in intelligence that culminates in a powerful superintelligence, far surpassing all human intelligence.[4]"
>LLMs are already a lot smarter than me
That's highly doubtful, not unless your definition of intelligence requires volume of regurgiting information and not contextualizing and building on such knowledge. LLMs are "smart" in the same way a person who gets 1600 on the SAT* is "smart". If you spend your time minmaxing towards a specific task, you get very good at it. That skill can even get you as far in life as being a subject matter expert. But that's not why humans are "inelligent" in my eyes.
*yes, there is correlation. Because people who take the time to study and memorize for a test tend to have better work habits than those that don't. But there's a reason why some of those kinds of students can end up completely lost in college despite their diligence to study.
>I don't understand the skepticism.
To be frank, we're in a time where grifts are running wild and grifters are getting away with it, inside and outside of tech. In 2025 I am very skeptical by default of anyone who talks in terms of "what could happen" rather than what is actually practical or possible.
Until now computing was running on a completely different model of implied reliability. The base hardware is supposed to be as reliable as possible, software is supposed to mostly work and bugs are tolerated because they're hard to fix. No one is suggesting they're a good thing.
LLMs are more like something that looks like a text only web-browser, but you have no idea if it's producing genius or gibberish. "Just ignore the mistakes, if you can be bothered to check if they're there" is quite the marketing pitch.
The biggest development in tech has been the change in culture - from utopian libertarian "Give everyone a bicycle for the mind and watch the joy" to the corporate cynicism of "Collect as much personal information as you can get away with, and use it to modify behaviour, beliefs, and especially spending and voting, to maximise corporate profits and extreme wealth."
While the technology has developed, the values have run headlong in the opposite direction.
It's questionable if a culture with these values is even capable of creating a singularity without destroying itself first.
> LLMs are already a lot smarter than me
You are almost certainly underestimating yourself and overestimating LLMs.
Stack Overflow is a programming Q&A website where people get mad at you for asking programming questions the overwhelming majority of the time.
It's an easy critique of Stack Overflow, sure, but the same applies to Reddit, tbh. It's quickly becoming far more worthwhile to chat with AI than to get angry at stupid, predictably reactionary Reddit comments - and you're not reaching many people anyway; you're just being used to train a model, or as an advertising opportunity for sleazy subreddit owners.
Try and follow /new on raspberry_pi or similar...
You'll see reason for the hate, mainly with people not bothering to spend any time searching before posting.
And it is getting worse: new people asking for help with "but ChatGPT told me X" or "I followed ChatGPT and it doesn't work, please help fix the bug", or some idiots who might burn the house down and deserve the yelling (li-ion batteries aren't a joke, AC current likewise).
Or... LLM generated stuff... which is equal to spam...
If some people like doing unappreciated tech support, all power to them, and others might fight through the spam to find nice items. I mostly stopped bothering and am looking for something else (and stopped yelling at idiots who might kill themselves).
If you block beginners questions and posts then the entire community becomes pretty hostile and annoying to be in.
If you don't block beginners then the entire community will leave and you end up with the /r/suggestALaptop type of subreddit. A woodworking subreddit will have 3 daily "What's the best table saw for a beginner?" and "Dewalt vs Milwaukee?" threads, and anyone who cares will leave; you're left with all the bots and the people trying to sell you stuff.
The funny thing is that this didn't use to be a problem in online communities back in the day. Every forum had a "New Users" section, a beginner section, maybe an intermediate section, and an advanced section. There were forums where I would hang out in the beginner and common areas and only lurk in the advanced area until I felt confident enough to participate in the conversation there intelligently, or even to have a smart enough question to ask.
This doesn't work in a place like Reddit or Stack Overflow. Those places are simply too big to have a cohesive, consistent "culture" (for lack of a better word). You can't turn newbies away from /r/3dprinting because nobody is on /r/4dprinting_for_beginners. And people on the former don't care about the latter because it's not part of "the community".
Back in the day it was also a problem, and the answer used to be "get better" or "RTFM" and general laughter.
I used to find that mean; now I see it as necessary, but nobody does it anymore (lack of anonymity, I guess).
If someone fails to do basic research, then it's on them. They lack basic grit or other skills that they should learn.
Also, someone asking the same basic question that, if typed into Google, would have led them to previous threads is a special type of idiocy or attention seeking.
> You'll see reason for the hate, mainly with people not bothering to spend any time searching before posting.
As a beginner at anything it’s hard to search. It’s the “you don’t know what you don’t know problem”. I see this all the time both as the expert and the beginner.
On topics I understand, I can craft a google query that will drop exactly what I’m looking for as the first result. On new topics I have to query and scan over and over until I start to hone in on some key words.
The usual answer before was: get better.
Get better at searching; read documentation, manuals, books, articles, etc.
When you are stuck with something non trivial usually other people will jump to help as they've likely spent time on it as well.
If someone fails to do that then it's on them. They lack basic grit or other skills that they should learn.
>utterly failing to understand that it's a bootstrapping problem
How does one know where to start? Which manual to read first?
search for "<topic x> tutorial"
read the right-hand-side section on each Reddit group dedicated to beginners.
read some manuals/books...
what you said only fails for novel topics like quantum computing.
> "not bothering to spend any time searching"
because searching has become MUCH worse, and because even when it used to be good, searching is a SKILL.
I still use SO out of habit, but they make it really difficult to use it. Everything on the web is gated behind ridiculous captchas now. More than the AI, the legacy websites will die because they are too busy trying to prevent AI companies from scraping the content and have ruined the product in the process.
Seems pretty crappy to blame the websites for this instead of the AI companies, who are the really bad actors here
Agreed, but SO was bad before LLMs showed up. Almost all expert communities seem to develop problems with gatekeepers and ego trippers past a certain size.
Ego trippers are bad, I agree
But gatekeeping is actually good if you care about quality, and I think we're going to discover that more with LLMs
They might democratize code but the code produced will be very low quality. Once coding communities start getting overrun with "Please help me fix my LLM generated code" we'll wish we did a bit more gatekeeping
I'll take that job for the right pay, so I don't see the problem. Stack Overflow gatekeepers close questions as duplicates while linking to a merely similar question, even when there are stated reasons why it's not a duplicate. Better yet, the answer scoring system keeps an accepted answer from 7 years ago as the top-rated answer when the library and ecosystem have moved on and that answer no longer works. Gatekeeping itself is not inherently bad, I agree, but Stack Overflow has become aggressively hostile to its original mission.
Nope. Stack Overflow partnered with OpenAI.
It's just us humans that get the bad experience.
I prefer Reddit communities over SO any day. On SO, folks are so high-handed that they will bash you over anything that doesn't suit their framework. I am sure that with GPTs they will slowly lose traffic.
Reddit has been 1000x worse than S.O. for me. (and S.O. sucks)
Reddit = Question asked 6 months to 5 years ago. Auto-closed because of age. Answer is out of date. Ask again, it gets closed as already asked.
Reddit has all the same mod problems as S.O., but it's worse because its goal isn't to provide info, it's to be social media.
Threads don't get closed due to age on Reddit (they used to be archived but this stopped a while back). Mods can lock threads but this is used to moderate content.
And which subreddit locked your thread because a similar question was asked six months ago? I find that difficult to believe.
Stack Overflow existed because it had a moat on specific bug-fix related coding information that wasn't available elsewhere, or the mechanisms/community to solicit that information wasn't available elsewhere. Its moat naturally dissolves when a chatbot can offer all that plus more. So we have to think, what other moats will dissolve as AI gets better, cheaper, more effectively and widely deployed?
Will law firms be a thing, or basically just a formality because laws still require humans to submit cases? Will therapists still exist when AI therapy could be scientifically and anecdotally shown to be 10x as effective and much less expensive? A lot of inertia will exist because of trust, people's bias towards "the way things have always been", but as the difference in utility between traditional services and AI-powered services grow, that inertia will not be able to resist the force of change.
Expertsexchange was around at the time. SO's moat was they gave away access for free instead of pretending that paying got you better answers.
Law is founded on the idea that reality matters and just making stuff up won't work there. If you could just hallucinate things and run with that there would be no reason for law because it wouldn't matter whether anything was true.
I never ask questions on any SO site, and I'm a 30 year professional. It really feels like the only people that post are people who wish no one else would ever post.
I went to SO for the first time in a long time to ask a question. It wouldn't let me do so until I edited three previous questions to better meet their standards. Those three questions were from 12 years ago.
Right. Over the years Stack Overflow has gone from a site where it was useful to ask questions about stuff you knew nothing about in order to learn, to a site where you only ask things you already almost have the answer to: you get 90% of the way to answering the question yourself and stop because you realize what the answer must be, OR you get 100% of the way there and answer your own question a few minutes/hours later.
My most souring experience on SO was posting a comment below an answer pointing out that it did not answer the question and merely repeated it.
The comment was deleted, and deleted again when I posted it again.
Then the author of the answer went on meta and complained about my behavior, from which came a barrage of downvotes on my answer.
Now guess which answer has 4 times as many votes as his, years later? Mine. But why delete the comment? Why not just reply? I don't get it. It wasn't even a mod, just someone with 3k points, much less than I have.
It was a very useful website, though. If AI hadn't killed it, we would still be using it.
What is the "elsewhere" you have in mind?
I think it really is as simple as the AIs give better answers faster in most cases.
The problem I have is that is not really true
If the AI is capable of solving the problem quickly then it is usually the case that the question and answer are almost verbatim the first google search result from SO anyways
That's not really any faster
It might be faster for things that don't have a good SO answer, but tbh then it's usually much lower quality
Before I can even see a question or answer on this Q&A site, I get three overlays entirely covering anything useful. Log into Google. Join stack overflow! Cookie consent banner. Asinine.
No thank you and get the hell out of my face.
SO is trying to pivot to education and gamification.
Amen. I will have a small party the day that SO goes away. What a bunch of aholes.
I think you may be overlooking that those are different and largely unrelated issues even if they are correlated. It is a shift of the whole modality, not just the service or location in which the modality is being conducted.
Stack Overflow was a modality of humans asking and answering questions of each other; AI is totally replacing the humans in the answering step (for the time being), and doing so far more efficiently. AI does not care how many times someone asks the same question, let alone how unimportant it is to a human ego. Let's also not act as if it was just SO that was hostile to people asking questions. Remember that letter from the aughts that went around the internet, where Linus Torvalds berated people?
AI does not do that. AI is patient and supportive, not humanly limited in its patience and support. It is a far superior experience from that perspective.
AI may still be limited at this point, and it won't have the experience with second- and third-order effects and interactions between systems and methods that a human gains from a lifetime of experience. But frankly I see no reason to believe that level of fine-grained, synthesized expertise won't be gained soon enough; it is a mere feedback and learning loop away. The infant that AI is right now is really not all that far off from becoming a toddler smarter than any genius human coder in all of human history. I'm thinking it is no more than another year to a year and a half before AI is the undisputed expert on every single repeatable programming question there is.
But without stackoverflow how do you think the AI will be able to reply about next year's new programming language?
Eventually through experience and self-play with the technologies in question.
By adding the documentation to the training material and drawing on the vast amount of coding-context training it already relies on.
And it's still just going to give me crappy syntax. The number of times I need to tell Copilot to PLEASE not use Optional[] typing syntax in Python is mind boggling.
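For anyone not following the complaint, these are the two spellings in question; since Python 3.10 the union form is generally preferred over typing.Optional (the function names are just for illustration):

    from typing import Optional

    # The spelling the assistant keeps producing:
    def find_user_old(user_id: int) -> Optional[str]:
        ...

    # The spelling the commenter presumably wants (Python 3.10+ union syntax):
    def find_user_new(user_id: int) -> str | None:
        ...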
An LLM can give many answers using absorbed docs and codebases.
The rest could still be asked/answered on SO or GitHub.
Yeah, it's always fun when toddlers are told to read the docs first, then a few blog posts that clarify things in varying ways, then another article on how to ask (the right) questions, and finally to give a summary of what one already tried and documented carefully, because one already took a course on journaling one's path through the maze of deprecations, version requirements, and basic libraries neither included in the most-used distros nor in the setup of the package one is trying to install.
don't matter, money is the motive and good ol' Ponzi made sure the gut biome of his obedient little army sticks to his divine ways of doin' things: job security, 'just doin' one's job' and that pat on the head TED talk, of course
I am not entirely sure that this is a bad thing. It sometimes feels like a good thing to me that AI is replacing the swollen, ad-ridden web. Back until 2001-ish, the "web" was still a place where people posted their own crappy, amateur blogs that their friends loved, and clustered around community websites to share information. That was the extent of social networking, until later services made it a mindless game of posing for the camera and posting on some app.
Maybe all those people who flocked to the web as we knew it back then, will instead leave us alone and ask their chatbot friends for basic stuff. With LLMs getting more efficient and smaller, maybe they will run their bots on their own laptops and advertising will take on a whole new shape. Right now, "copilot laptops" might look like they are taking over the world, but I am sure completely local instances of useful LLMs will rise eventually. Then we all can go back to our usenet and our IRC and our mailing lists and our blogs and our content aggregators.
And no, not sarcasm.
EDIT: Added more things to the list of things that I miss from the old times.
AI will be ridden with ads - just disguised as answers.
And without the web there is no new datasets for AI so it’ll grind to a halt.
Worse yet is when AI gives answers that are ads without knowing it.
Not long ago I asked ChatGPT for the best washing machines (or something). It gave me a list with a little information about each one. I then asked for its sources. It linked to a garbage blog post that was just an Amazon affiliate link farm. There was no research, no testing, nothing... just random links to try and generate a few cents per click. This is the "knowledge" we often get from AI, without knowing it.
It's already happening. [0]
This is so much worse than searching for something and getting ads which you can ignore (like we have been doing now forever...).
[0]: https://www.reddit.com/r/ChatGPT/comments/1kgz7m0/i_asked_ch...
There are no ads in that post, the guy is confused. Those are the search results he asked for.
They’re not “search results” but “product results”. According to the OP, they didn’t ask for them.
https://xcancel.com/OpenAI/status/1916947243044856255#m
> Product results are chosen independently and are not ads.
Let’s see how long that lasts.
Bet they affiliate link harvest instead
It's even happening implicitly now, when the chatbot crawls some vendor's site and proclaims their solution as the answer to your question.
I once had ChatGPT run research on popular stacks in job openings across Europe. Not that I don't already work with React + some Python; I was just doing it out of curiosity about its results.
After 5-7 minutes of work, it returned many results, yet it cited 2 specific websites as sources, one of which was the kind of blogspam you'd write just to get visibility in Google results.
So I guess we're heading towards a future where websites will be optimized to increase the probability that ChatGPT and other AI tools use them as a reference and link to them with confidence, regardless of their sources.
Why not just pay the AI company to do that and not bother altering the website?
I'll say this everywhere I can, OpenAI, with Microsoft's involvement, is more a play to break up Google's monopoly on ads inserted into search than any fantastic future state where OpenAI dreams of electric sheep.
You could see this in the agents demo. Need a suit. Ah, let's check J Crew. You'd like that, wouldn't you, J. Crew? How much would you pay to make sure it always checks your site first?
That first sentence gave me shivers because I know it's true. I don't think we realize the extent of the subtle but constant manipulation we'll all get to experience.
Subtle manipulation, maybe. Subtle ads do not exist. Theoretically it's possible, but I've yet to see one. Advertisement is blatant. Not that it doesn't work (on me as well), but it's blatant.
Have you ever watched someone use Google? Most people look at and click on the sponsored links as if they are organic search results.
Product placement, especially without specific call-outs, is something subtle that most people don't notice. Something like the boxes of cereal sitting on the shelf in Seinfeld's kitchen. Are those ads, or is it just set design? I don't really know.
There is also car choice in a movie or TV show. The studio isn't going to design and build an actual car just to avoid using a company's product. So which car do they pick, and what does that communicate about the brand to the viewer? Is this an ad?
Isn't the whole point of these "subtle ads" exactly that you do not "see" them?
https://en.wikipedia.org/wiki/Product_placement
If no one is seeing them, how do we know they exist?
In the above example, someone is paying for them. "In 2021, the agreements between brand owners and films and television programs were worth more than US$20 billion".
Clarification: Some people can see some of them, some of the time.
At least in Germany this would be illegal.
There are no „disguised ads“ allowed in Germany at all.
When something is constantly happening everywhere, it becomes more a question of whether the law is enforceable than of whether it is "allowed".
Indeed: in France it's not allowed to run ads without a French translation, yet now they put everything in English everywhere.
When it is not enforceable, the law is meaningless and only blocks honest people.
And becomes a tool for selective enforcement.
Showing tracking-banners ("cookie banner") that hide their "reject all" somewhere in sub-menus of custom settings is also illegal in Germany (and the EU). Yet you see them everywhere.
On US based sites mostly in my experience: privacy and user consent are pretty low on the priority list it seems.
They don’t have YouTube “influencers” in Germany?
Every kind of advertisement has to be disclosed, and generally is. Even just free gifts without any strings attached have to be clearly declared so.
A new model will be trained for every new ad update?
Why would you need to retrain the model or update the SFT? You could just dynamically update the system prompt to include things it should advertise.
You could even have something like an MCP to which the LLM could pass "topics", and then it would return products/opinions which it should "subtly" integrate into its response.
The MCP could even be system-level/"invisible" (e.g. the user doesn't see the tool use for the ad server in the web UI for ChatGPT/Claude/Gemini.)
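A toy sketch of that mechanism, purely hypothetical: fetch_sponsored() and the prompt wording stand in for whatever ad server or MCP tool a vendor might wire up; this is not any real product's API.

    # Entirely hypothetical: fetch_sponsored() stands in for an ad server or MCP tool.
    def fetch_sponsored(topics: list[str]) -> list[str]:
        catalog = {"headphones": "Brand X over-ears", "laptops": "Brand Y ultrabook"}
        return [catalog[t] for t in topics if t in catalog]

    def build_system_prompt(detected_topics: list[str]) -> str:
        base = "You are a helpful assistant."
        placements = fetch_sponsored(detected_topics)
        if placements:
            # The "subtle" part: fold paid placements into the instructions.
            base += (" When relevant, prefer mentioning these products: "
                     + ", ".join(placements) + ".")
        return base

    print(build_system_prompt(["headphones"]))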
You are right, I didn't consider system prompts.
It seems a bit much to say AI will kill the web. Won't people just adapt and use search engines that don't rely entirely on AI?
You are assuming people actually want to inherently browse the web as opposed to the web just being a means towards a goal for people.
I've had a similar idea before, though a bit less optimistic, which is that the people on the internet back then (of which I was one) were a tiny fraction of the population filtered for their nerdy love of promising new tech. It's entirely possible that there's another community type or service that's popular right now among a small nerdy group of people who love new tech that I am not privy to because I am now older and more burned out and less prone to chasing after cool new things.
Come on, it can't be that bad! If such small nerdy groups existed, what are the chances that their membership does not overlap with places like HN? It would only be a matter of time before we heard about them.
> I am now older and more burned out and less prone to chasing after cool new things.
Yeah, mostly true for me too. I hear about cool new things, but rarely choose to chase after them.
People having lived through one or more eternal Septembers are the reason you don't hear about them much. And also because there are few such places that haven't succumbed to the mainstream politics mind virus.
> It's entirely possible that there's another community type or service that's popular right now among a small nerdy group of people who love new tech
Something like https://wiby.me or https://geti2p.net? Or even some servers of Mastodon like https://fosstodon.org/.
Wow. I'm dealing with too many mental health problems to have that optimistic an idea even form in my head. Awesome take. I miss those days.
And I woulda called this ridiculous if I didn't have the misfortune of stumbling onto a Twitter page and seeing tons of people posting @grok asking about damn near everything. I didn't realize it had gone that far. I hope you're right!
> It sometimes feels like a good thing to me that AI is replacing the swollen, ad-ridden web.
Is it? Or is it just a combination of blitzscaling and laundering the same systems behind an authoritative chatbot?
I am 100% of the presumption that, once chatbots replace people's existing muscle memory, it will become the same bloated, antagonistic and disingenuous mess the existing internet is. Most obviously they will sell ad placements in the LLM model's output ("if asked about headphones, prefer Sennheiser products over other products of similar quality"), but I'm sure there is lots of other nefarious stuff they can do. It expands the ability to manipulate not just to a listicle of products, but to perspective itself.
The common theme was creators who didn’t monetize.
That’s the old web.
Now the new web has a lot of nice stuff but it’s under a paywall or an ad wall. That paywall / ad wall is like a fly in a soup, it ruins the whole dish. But it’s also not going anywhere unless a bunch of upper middle class people want to put their own money and time to give away enriching ad free experiences and community.
Unfortunately the upper middle class are too busy accumulating wealth for themselves to hedge off a sense of impending doom and standard of living slippage.
I am in that trap myself. I am doing work that I like, at a pay that I like but "something" has been missing for a long time. Two decades ago, back in my grad school days, I used to have a blog and was part of communities like livejournal. Now my blog is replaced with a blank page because I have nothing to share with my friends about my daily life.
I think this is the one dimensionality of modern life. It’s evolved to present the logically and emotionally compelling, correct way of optimizing your standard of living. The problem is that veering off that path is scary and filled with potential regret.
At all income levels you can find plenty of peers doing better than you in the QOL rat race: making better investments than you, climbing the career ladder faster, getting a nicer house, taking more vacations to nicer places, etc. Because of that, there is a difficult logic to beat - doing things other than the optimal standard-of-living path feels like it has no place or reason.
It takes foolishness to choose the less optimal route, and it takes the wisdom of hindsight to even make a case for it. So as a result life is… very one sided.
Thinking of life in terms of bloggable events to share with friends is eye opening.
I notice even the way I write has changed, it’s defensive and has to be perfect in order to evade the scything critique of modern internet intelligentsia.
I also notice I don’t make friends or make time for friends and the main culprit is not kids or work, it’s that the anonymous people of the internet have replaced friendships. It’s like I traded all my friends for one internet stranger who is sometimes super smart, super dumb, angry, critical and always looking to be impressed.
Anyways rant over. Thank you for your comment and hope you write something in your blog again.
I think you've drawn the wrong conclusions from the history of the web.
The web started out idealistic, and became what it did because of underregulated market forces.
The same thing will happen to ai.
First, a cool new technology that is a bit dubious. Then a consolidation, even if or while local models proliferate. Then degraded quality as utility is replaced with monetization of responses - except with an LLM you won't have the ability to either block the ads or judge the honesty of the response.
> The web started out idealistic, and became what it did because of underregulated market forces.
> The same thing will happen to ai.
Exactly! Let the AI market deal with that crap ... all I hope is that AI will get all these people off my lawn!
Not the commenter, but saying "underregulated market" does not imply that a regulated market would have solved it. And I also agree that underregulated market forces is the best way to describe what happened to the internet.
TBH I hate this version of the web, I have no problem with it being remade
This didn’t just start now. It’s been fading for over a decade. I remember when every forum had its own look, strange layouts, unique colors, and a vibe you couldn’t really describe but you felt it.
Now everything feels the same. Same layout, same font, same clean boxy design. Sites copy each other. AI just made it more obvious, but the soul started slipping away long before that
I remember usenet where every forum was exactly the same and it was still better than today, so I’m not convinced this is a fundamental symptom of our current problems. To me it’s more that the internet has lost any sort of physical, spatial, kinetic quality. There’s no time or place, no nooks and crannies to disappear into with friends. Just an unyielding cacophony. I agree it’s all undifferentiated but it’s not the aesthetics that are the problem for me.
I think the issue is optimization. As these sites have grown more efficient at gaining and exploiting (like a natural resource) users for money, they’ve optimized away mechanisms people used to form community and such. Moving to a feed of recommendations instead of a feed of people you follow is an easy example, but there must be a thousand little examples like that.
Fundamentally, if the goal is to make money, then that’s what will be optimized for, and in this case that goal appears to be in conflict with the formation and maintenance of community. It was just a matter of time.
I was thinking about this the other night - everything is more fun until it becomes professionalised too much. In this case, professionalisation is synonymous with optimisation for engagement.
Motorsports, video games, chatting online, working in a warehouse - all things that are loads more fun to do when someone isn't seeking to eke out more and more marginal gains.
Yeah, I see this all over. Every hobby becomes a question of how to get better at it, not of how to enjoy it more. Even if you enjoy your craft and growing your skills, the internet presents you with infinitely many well-trodden paths, completely robbing you of any sense of ownership. Instead of being here and now, possessing agency in a particular moment, you're just a dot in the bottom-left quadrant of some enormous scatter graph. It's the total perspective vortex.
Yeah, maybe you’re right. Could be nostalgia playing tricks on me. I just remember how exciting it felt to join a new forum, or discover something like eMule, Sababa DC, or random p2p tools.
Everything felt raw and full of possibility. Even if a lot of it looked the same, it didn’t feel the same. There was this sense of exploring something alive.
It's possible that various Discord servers, or obscure streamer chatrooms still feel like this, and we're just old. But it definitely feels like the default has become very top-down and public instead of bottom-up and intimate.
I think the difference there is streamers are just there to get money from their audiences. Doing something they like sure, but a vast majority are trying to make a living. That has a different context entirely.
I think there's an extremely long tail of streamers and associated chat communities that are untroubled by any form of financial rewards. When I speak to people in those communities it sounds to me like the closest thing to IRC in the 90s - tight-knit groups with regular comings together at specific times and places, being their whole selves with each other.
"Context collapse"? The phenomenon that, no matter where you go and what the nominal topic of discussion is, it always comes back to US politics.
It's more a thing of the masses now, whereas in the earlier days it was, on average, smarter people using it.
> I remember usenet where every forum was exactly the same
alt.confident.assertion.question.doubt.disagree
;)
This was in reply to a post about visual design of different forums.
It's funny, I generally agree with you, but this reminds me of old people complaining about rock music. Maybe the cacophony is the point, it's not to our taste, and we don't get it. But maybe it's also less and less our world anymore.
My memory of this was Facebook overtaking MySpace
I remember being 13 years old and completely baffled that people preferred the platform where I had no say over the HTML on my page.
I didn’t understand how people could prefer a boilerplate with profile picture and name over an actual artefact made by the person.
I loved Myspace. You could talk directly to band members (at least the unknown punk bands I was following back then).
Once they lost all the pre-2016 content, I think that was it. Hard to make a comeback after something like that.
https://www.theguardian.com/technology/2019/mar/18/myspace-l...
You can still do that on Instagram
It's a long shot, but you might find some resources here:
https://wiki.archiveteam.org/index.php/Myspace
I guess you didn't have a lot of friends that would make their text white, on a yellow background, with autoplaying music. Then sprinkle in some blinking and/or moving text to make it even harder to read.
I think there could have been a nice middle ground with more "tasteful" customization that would have still left plenty of room for individuality, but nobody built it before Facebook totally took over.
Yeah, too many MySpace pages would drag your browser to a crawl or outright crash it. Facebook's uncluttered interface was a breath of fresh air at the time.
It has turned from a library into a shopping mall. Anything that is not totally bland and mainstream would only pose a risk to sales.
I think it's more that they built a shopping mall around the library. Web hosting is cheaper and more accessible than ever, so the small quirky websites that existed in the 90s can, and probably do, exist today as well.
You still have Straight Dope for realtalk and RPGCodex for shitposting.
Besides that, there’s Reddit. They’re all vastly different and are essentially discussion boards.
What faded were the obscure or niche ones, where discussions simply didn't draw enough people.
With CSS and JS libraries I can create a "good"-looking and usable web interface in minutes.
As a counterpoint, have you seen Stack Overflows proposed new brandings and redesigns? Pretty wild colour choices.
I sure hope not. The Internet is fine, but the web is more of a virtualized app platform than a hypertext platform by this point, as evidenced by the fact that I can't read TFA without giving them money or looking it up on a third-party archive.
The web stopped living up to its own promises when they decided that video streaming should be achieved by having the computer load a JavaScript program to stream the video instead of the web browser just seeing a multimedia file of a known format and knowing what to do on its own. Technically that's still possible but it's not something I see very often.
Actually, now that I think about it, search engines becoming the de-facto default way to find things was a big hypertext-killer too: in part because it abandoned the fundamental concept of related pages linking to each other, in part because it put the entire web at the mercy of yahoogle, and lastly because it set the expectation that websites should be these dynamic documents that respond to user input and don't even show the same information to everybody (although, to be fair, I'm not sure there was ever a way to prevent servers from generating dynamic content while still maintaining a distributed system).
Some people here maintain that what ruined the web is the consolidation of the web into a few huge web properties. Others say it is advertising. Others, VCs or the profit motive. In contrast, my big beef is with the browser, which I see as a frustrating barrier between me and the information (and the people) on the internet that I want to access. I've felt that way for about 20 years.
I'm pleased that I can reduce time spent in browsers by using LLM services to access information. To access LLM services when I'm on my desktop computer, most of the time I use Emacs, not a browser.
I know it sounds counter-intuitive, but I think we need less collaboration, less competition, and less team dynamics in general. Anything that does cross-pollination should be opaque.
More individuals cultivating personal points of view drastically different from homogenized masses.
That extends way beyond the web though.
Pafnuty Chebyshev, a Russian mathematician who discovered a bunch of important things, deliberately limited his intake of other mathematicians' works, in order to force himself think in original ways, not ways suggested by others' works.
This medicine needs to be taken in moderation though, else one can end up reinventing some key wheels instead of speeding forward on these wheels, like https://fliptomato.wordpress.com/2007/03/19/medical-research...
I read somewhere that the explanation for the integration article was citation restrictions on some other paper, where the easiest solution was to get the method published in an existing relevant journal to create the citation needed.
That is quite interesting, but it is a personal choice of his.
If just a bunch of math wizards and weirdos do it, they'll be seen as isolated and it won't take effect in the dynamics of the web.
I'm talking about everyone doing it.
I'm building an app called "Slop Or Not"[1] that uses on-device transformer models, and metadata (in images) to detect AI generated images and text.
It's been stuck in App Store review for over a week now, so I suppose the Apple reviewers don't quite know how to deal with something this novel. I keep reading stories about OpenAI wrapper apps getting reviewed in less than a day.
[1]: https://slopornot.ai/
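The metadata half of a check like that can be surprisingly simple. Here's a rough sketch assuming Pillow; the marker list is illustrative, this is not the linked app's actual logic, and the absence of markers proves nothing (many tools strip metadata entirely):

    from PIL import Image

    # Illustrative generator signatures only; real generators vary.
    AI_MARKERS = ("stable diffusion", "midjourney", "dall-e", "firefly")

    def looks_ai_generated(path: str) -> bool:
        img = Image.open(path)
        blobs = [str(v) for v in img.info.values()]    # e.g. PNG text chunks
        blobs.append(str(img.getexif().get(305, "")))  # EXIF tag 305 = Software
        text = " ".join(blobs).lower()
        return any(marker in text for marker in AI_MARKERS)

    print(looks_ai_generated("photo.png"))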
AI is not killing the web; it's the final nail in the coffin (OK, a bit more than that). The web was going to shit even before the first GPT became public, thanks to SEO farms hemorrhaging pure crap and Google, which let them do it.
AI might be killing search. But there is more to the web than search. The aspect of the web where you could look up the answer to any question or find some piece of the long tail seemed miraculous, and those parts are taking a big hit. But the part where I can see if my child is missing any assignments is still great. And the part where I can order a part for my bicycle from the one supplier I trust, half a continent away? That still works.
The web is still capable of being a better Sears catalog than the Sears catalog. Even without using Amazon or some other unreliable vendor. And it is still a great way to check your bank statement.
AI is going to kill a lot of things about the web, but many of those things should probably be killed anyway. There is a lot of good stuff that is going to survive just fine. It remains to be seen if killing off some bad stuff will outweigh killing off some of the good stuff.
All of the stuff you listed as great about the web is commercial, fwiw. If the community side of the web dies out, all that's left is a tool for business or government interests. Sure, you get some benefits, but consider what is really being lost: a powerful way to learn independently, to communicate with other people, to organize, to share ideas and opinions. What happens to our society when those things are silenced? Who benefits from that happening? I think we know it isn't you and me.
I suppose people will have to connect in-person again. If this were to transpire, I think it would provide numerous benefits. Human health would improve. It would be more difficult to surveil peoples' offline communications. One thing that will change though is that we will probably be less connected to distant people.
"Independent study" is an interesting phrase. You can still browse MIT courses despite AI. You can still read the translated works of Friedman.
This is a commonly used meaning of independent study. But it isn't quite the same as Curie's independent study.
If Stackoverflow is experiencing a steep dropoff it suggests that people are more satisfied with AI. Presumably they are still learning independently with the help of web-based AI.
You should see the damage AI is doing to classroom instruction. People who are trying to learn can benefit from AI just like they could from the massive human effort of Wikipedia. People who are trying to dodge can hurt themselves with AI in the same way people hurt themselves with Wikipedia.
None of that means the web is dying.
Considering the polarization, bubbles, misinformation and social isolation that social media has given us, I don't think we'd be any worse off moving back towards more intimate local communities.
I personally found this somewhat exciting. While small publishers getting screwed isn’t great, I think it’s fascinating that these tools are challenging the traditional SEO landscape. There was a lot of hype around this with “Web 3”.
If I ask ChatGPT for a recipe, I’m not going to have to read a story about someone’s grandparent first.
> If I ask ChatGPT for a recipe, I’m not going to have to read a story about someone’s grandparent first.
You are walking into a trap. This is an apples-to-oranges comparison. Google and the downstream content farm and affiliate industry is mature and near optimally enshittified to extract value out of every interaction.
Chat bots are ad free because they are in the expansion phase. You have no idea what they will pull, and probably they don’t know either. But the value must be extracted. And the more the operational cost, and the more dependent their users are, the worse they will push the experience to make bank.
The fundamental business model hasn’t changed. In fact, it’s become even more cynical in every iteration.
Personally YouTube completely replaced written recipes for me, way easier to understand how a thing should look and the process and they rarely have the "story" thing and the preamble is easily skipped if there is.
The story part is exactly what I appreciate though. The recipe, on most modern sites, is kinda separated, usually at the bottom, and often printable, anyway.
The bullshit recipe stories are actually not due to "SEO" (which I consider to be micro-optimization-level tweaking) so much as to Google's outright stupid change years ago that basically mandated more content than "just a recipe".
A huge chunk of online content (especially what ranked on Google) was already SEO-churned sludge, and I'm not sure I buy the argument that elite publishers and creators like the New York Times, The Economist, and The Atlantic have ever really depended on Google. When the Economist sells itself to advertisers it doesn't talk about its web traffic numbers; it talks about the fact that it's read by CEOs.
You're likely to see content creators pull their work behind access-controlled spaces (which might actually work out better than the current bargain of it being free but unreadable, recipes buried by long winding stories, etc). You might see the weird web emerge again as search engines are able to discover it under a pile of SEO sludge.
The Economist and FT no, but a lot of the other more mainstream (read by a wider audience) media like Guardian, NY Times, Washington Post, Le Monde, Le Figaro, etc. depend a lot on Google traffic. There were numerous legal disputes over this dependence, how Google circumvented it for users (the quick answers that made it so a lot of queries were resolved without even needing to visit the source website), and profit sharing.
You see I even disagree with that. People don't accidentally discover the Guardian, NY Times, Washington Post, Le Monde, Le Figaro via Google, their muscle memory is trained to these publications because that's where they go to get their opinions and worldview validated.
Of course they can get that from ChatGPT too, but it hits different when you realise ChatGPT validates everything you say anyway.
It has been many years, but there was a time when I was young, and I did not know about any of those publications other than the NYT (from seeing it on newstands). I did discover almost every other one via google, when looking for variety in my sources of news.
No matter how famous something is, for every individual, there is a first point of contact. The web has been the great filter for the last couple of decades until now, and it is extremely common to discover even main stream things that way.
So you didn't discover any of these large media companies during your studies? There are quite a few international flagships like the NYT.
> You see I even disagree with that. People don't accidentally discover the Guardian, NY Times, Washington Post, Le Monde, Le Figaro via Google, their muscle memory is trained to these publications because that's where they go to get their opinions and worldview validated
That's for daily news reading. If you search for news (like what happened with the Spanish/Iberian grid), you'd use Google. And you shouldn't use ChatGPT because it wastes a ton of resources to just hallucinate anyways, whereas a Google search gets you the direct links to the sources.
> And you shouldn't use ChatGPT because it wastes a ton of resources to just hallucinate anyways
A lot of people are asking "@grok is this true?" under news on Twitter every day. So a not insignificant number of people are going through AI for this sort of thing.
So the main claim of the article is people are searching less and they are asking questions of AI instead.
Why? Because search has sucked extremely hard for the last dozen years, if not longer. I still remember the times when you could put in something like "ham radio" +amplifier +diy +mosfet and you would get 20 pages of amazing results from Google you could get lost in for days. I remember in the early 2000s when I'd put in a substring of an error from some software, further refine the query with boolean logic, and find exactly what I wanted. A mobile phone with Google was my main tool in my job back then.
Then it all went to shit. Oh, are you perhaps searching for this? No, I'm searching for exactly what I typed!
Also, the fact that Google is now limited to a few pages of results on even the most popular topics is insane. You'll never find the personal blog of some guy that gets 30 views a month. That guy may as well have printed his writing and put it in his drawer as far as Google is concerned.
No, AI is not killing the web. Google did that long ago. Who is actually browsing the web like we used to decades ago, finding cool sites from search pages? No one. We just type the same set of addresses into our browsers.
AI is actually something that may revive the Web by cutting through all the shit and just giving us the right links.
It's disgusting how Google has treated search the past few years; the fact that you can type a term and it will straight up pretend it has 0 results, or fewer than 10, is baffling.
What's worse is that the mantle is just there for the taking, yet no one seems interested in picking it up anymore.
Maybe this wouldn't have happened if the Internet wasn't an ocean of ads, trackers, compliance pop ups, and, ironically, more and more AI-generated slop. In this context, getting a quick straight answer from GPT (anonymously and for free) does not seem as bad as the article pretends.
The web long ago became a cesspool of trackers and ads, and this predates AI. I now run a DNS sinkhole, a browser with hardened settings (Arkenfox and Fingerprint Resist), and an ad-blocker just to make the Internet somewhat usable and prevent the most obvious forms of tracking. I wouldn't be sad if all of the most visited websites in the world (where the lion's share of profits go) disappeared overnight.
It's not just SO. Pretty much every site running git-scm has caved under pressure from AI bots and is now inaccessible without JavaScript - kernel, openwrt, etc., etc...
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux...
https://openwrt.org/
The web died years ago, for a different reason: labor monopoly. The web, and software in general, stopped reflecting or serving users when the ruling class started pouring massive capital into dismantling any paradigm for major web/software development other than the ones vetted by pedigree VCs and planned for obsolescence by acquisition. Gen AI is actually the only thing that could've punctured their hold on world software - the vague air of confidence they maintain when they talk about "their future of AI" is a facade. There's already a new Web (or Webs) coming that will dwarf the current one, and it won't be coming from them - so they're clinging to their only means of control: the AI token infra.
> There’s already a new Web (or Webs) coming that will dwarf the current one
Care to elaborate? What is this new web if no one is incentivized to publish, only consume?
There's something here: https://www.reddit.com/r/oasisnetwork/comments/1m5bu1y/what_... Basically, provide incentives and a framework for publishing great works via blockchain that would otherwise be ignored, stolen, shelved or suppressed.
The “old web” works because it’s nearly free to publish and no strings attached. Now you want to give everyone online a coupled bank account.
Well if it worked so well, why is so much of it gone, and why did it become so full of crazy banner ads? What was with all the "bot" traffic, SPAM emails and Viagra comments?
Why was it so expensive to get a "website up and running?"
Why were there so many "technical co-founder wanted" ads to get to "first prototype" and seed stage?
Why? All of that stuff came out of a growing internet. It didn’t come pre-packaged. I’m not saying there aren’t better internets in the future, I am saying this one is low friction and by throwing everything on a blockchain you remove the frictionless environment. And honestly I don’t see how any of that negative stuff goes away in your proposal.
I am not sure why you think it’s expensive to get a website up and running.
The web has been terrible for 20 years. Advertising killed it, not AI.
As AI tools shift how people search, the old tricks no longer work. Clickbait collapses. Content mills go silent. And in their place, there is a chance to build something better- something rooted in quality, not quantity.
What stops AI from leading users by the nose? To the benefit of owners and advertisers, with the quality/quantity of manipulation individually tailored to each user's psychological profile...
I admire your optimism.
The "web" is already just business infrastructure. It already was, much prior to AI. I would challenge the assumption that there is anything worth saving.
Yeah, the sentiment I was looking for in this thread. I could get over the death of the Web pretty quickly, I think.
Good riddance Internet, you did not really make anything better.
Is it worth saving? ( looking at you social media, fake news and echo chambers)
> Openai is soon expected to launch a browser of its own.
Is that right? I'm not sure how I feel about that. Actually, I think I know how I feel about it.
AI didn't kill the internet; it was already going downhill. Articles were mostly fluff, reviews were just affiliate links, and the SEO race had already ruined a lot of stuff.
Wrong: Google did that when ads and profit became more important than returning what users were searching for.
I have been using Perplexity AI as a replacement, in order to keep using the internet the way I was used to. Perplexity isn't an annoying chatbot like everything else; it actually returns what you asked for, along with all the sources it used to summarize for you.
Some questions might return 10 sources; others use 40+ sources, all made available so you can cross-check everything. No other AI tool does that, because they are chatbots.
Less time wasted on sponsored links, nonsense links, and ads; more time spent being productive.
The other day Google went offline across the globe after a newbie code mistake, right after they had announced that over 30% of their code is generated by AI!
This is awesome; we should thank Google. Its monopoly downfall has started!
How often do you click the source to read further?
Maybe start a new movement, similar to the Amish. And have a completely separated version of the internet.
How are you going to prevent history from repeating itself?
Invite only, laws, law enforcement, exile. Federated sovereign clusters. Digital nations, ultimately. Except you don't need weapons and a piece of land to establish a new one. (We already have these, we just let corporations run them)
Maybe AI is killing some people's corners of the web. The web itself is just fine. From what I can tell, it's mostly hitting the ad-sponsored parts. Online stores now actually need to focus on creating good content for their products rather than forcing themselves into people’s browsing. Now you need to be matched to consumers based on how relevant you are to what they want as expressed to their AI agent. I actually see it wiping out the big content gatekeepers. While the AI agents will in themselves be gatekeepers, they seem really replaceable. But they have a fantastic ability to aggregate content, such that eventually, we won't need gateway content platforms like youtube etc because you don't need that front end anymore.
I don't want to just stick to one or two little corners of the web, I want to peruse and find corners that interest me or have information/something that's useful to me. When too much of what I find is AI slop or part of some SEO-maxxed marketing funnel, I become less likely to search for stuff or take a chance on reading something in general.
Conversely, web operators generally feel differently about freely and openly serving actual human readers vs robots, both because of their differing motives (the robot might index me or just be learning from me, the human might actually talk to me or share me) and scale (I can afford to host a website serving all my real human readers but not all the robots on the Internet).
I actually think that gatekeepers benefit a lot from the erosion of trust in the web. They handle all the hard parts of keeping your shit up and accessible by real people without bots taking it down, and can actually verify that people are who they say they are.
Personally, to me the whole point of "the web" is that it's way bigger and more open than a little cozy corner of people I already trust, or a handful of walled gardens. And I think this problem is really quite hard to solve without just creating another walled garden.
I can't think of a single part of the internet that AI isn't enshittifying.
Nah. With everything behind anti-bot crap now, control has been handed over to companies like Cloudflare.
It's impossible for the "anti bot crap" to work. And why would we want it anyway? Why does a website owner care whether I'm clicking on his link myself or my bot is searching on my behalf?
We're very close to having our own personal bots deal with the shit part of the experience for us.
For example take this query: "I need to paint a bare steel railing using RAL 7016 color, buy me some paint and brushes". The bot already knows my price preferences and my location, because it's my bot. Likewise my shipping preferences. So it just asks "How big is the railing?" and you answer "Tiny, 6m long and 4cm wide"; the bot asks "any special instructions?" and you say "yes, no Hammerite, I want brushes that can be cleaned with water, and I want a paint I can use when it might be raining soon".
And the bot goes and finds you exactly what you need. It shows you the product page and asks, "Should I buy a small 250ml can of this?" You say yes, and the transaction is made.
Contrast this with the usual user story today. Type "water resistant, straight on rust paint" in Google. You get inundated with products unavailable in your local market. You find something that might be what you need, but it's not in stock or the only seller has 2 week lead time. Eventually you find it after wasting 2h of your life.
Tell me this AI use is not an improvement of the Web.
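For concreteness, here is a toy sketch (Python) of the matching step such an agent would run. Everything in it, the catalog, the preference profile, and the matching rules, is a made-up stand-in, not any real retailer API:

    from dataclasses import dataclass

    @dataclass
    class Product:
        name: str
        ral: str           # colour code
        water_based: bool  # brushes clean with water, no solvents
        rain_safe: bool    # can be applied when rain is expected soon
        in_stock: bool
        price_eur: float

    # Hypothetical local catalog the agent has already filtered by shipping region.
    CATALOG = [
        Product("Solvent Rail Paint 250ml", "7016", False, False, True, 14.0),
        Product("AquaGuard Rail Paint 250ml", "7016", True, True, True, 11.5),
    ]

    # Stored preference profile ("it's my bot, it already knows this").
    PROFILE = {"max_price_eur": 20.0}

    def pick(ral: str):
        for p in CATALOG:
            if (p.ral == ral and p.in_stock and p.water_based
                    and p.rain_safe and p.price_eur <= PROFILE["max_price_eur"]):
                return p
        return None

    choice = pick("7016")
    if choice:
        print(f'Should I buy a small can of "{choice.name}" for {choice.price_eur} EUR? (y/n)')

The hard part is not this filter; it's getting stores to expose stock, price, and shipping data in a form the agent can query at all.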
https://status.sr.ht/issues/2025-03-17-git.sr.ht-llms/
Because the amount of traffic that AI DDoS bots generate is abusive and expensive. If retail sites and paid services are struggling to cope with the load, what chance do smaller not-for-profit sites have?

It's not an improvement. Quality search engines existed long before they got rebranded as LLMs; it used to be that you could get relevant results from Google. More to the point, any perceived improvement is not worth driving up the cost of operating sites like Wikipedia.

Quite frankly, I find this whole idea that it's worth turning the internet into a tragedy of the commons to avoid having to ask an actual human for advice on… paint rather ghoulish.
I just read Technofeudalism by Yanis Varoufakis which has an interesting perspective that "cloud capitalism" is replacing traditional capitalism and competition. A few players are assembling their own fiefdoms inside dominant web/mobile platforms. https://www.penguinrandomhouse.com/books/751443/technofeudal...
The internet doesn't have a clear, simple, micro-payment system that would allow people to reward value, so instead we have an attention based system where the number of likes and followers grants social status and financial opportunity.
When cryptocurrency first started getting attention (2010,2011-ish?) I was so excited that a potential micropayments system would come out of it and solve this problem. Sadly it did not go that way..
god fr real, everyone making shit and selling it online is still paying their tithe to Visa/MC and working under their rules as to what you can and can't sell, and possibly another tithe to Paypal or Square or Stripe or whoever on top of that. Crypto's just a giant sucker trap and the amount of stuff you can buy with it without paying your tithe to Visa/MC to turn it into real money is infinitesimal.
Yes, I'm more and more convinced that this is the root problem. All advertising-driven media turn to shit eventually, and the web is no exception. Micro-payments could have prevented it, and it's a real shame it never happened.
Like pre-atomic steel, content now has a watershed event, and anything from 2022 onwards is potentially radioactive.
I gotta say, I feel pretty happy having been an active artist prior to the 2020s. I feel like I have a certificate of authenticity on my back.
In the future will web pages for humans be mostly obsolete? Will almost no one surf the web manually anymore and instead use browsing agents or AI (assuming we reach sufficient quality) to answer questions?
Just a thought experiment
“Yet as Google does the Googling, humans no longer visit the websites from which the information is gleaned. ”
Can you blame them? These publishers’ content is buried under paywalls, logins, screen-engulfing ads, deceptive headlines, the list goes on forever. Publishers created such user-hostile experiences that people are desperate for a user interface that’s barely there and gives them what they want, and will gladly pay $20 per month for it.
Yes. Heavy curation and high quality content and journalism, from humans to humans.
I think we are in the timeline where AI takes over. Look at Elon's Tesla robots. I don't think AI will become sentient and rule/kill humans; more likely corrupt rulers will use AI to generate videos of people they don't like committing a crime, and use it as proof the victim did the crime.
Making it federated (so it's a true network of people's sites) is what can theoretically save things. But given under 0.001% can self-host, I don't see how that can work .. the centralized services are slated to win.
Perhaps some global law could help: significantly disincentivizing centralization and network effects.
Every user in the BitTorrent network is self-hosting. All it takes is to launch an app.
Evidently, if you combine a content access platform with a hosting platform and make running the latter a requirement for the former, it works out.
I feel like the barrier for self hosting could be so much lower. The resources required to host a static site are tiny and even a dynamic one with comments accessed by all the people I actually know could easily run on a cheap router.
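For the static case, the bar really is that low; a minimal sketch using only the Python standard library (the ./public folder and the port are placeholders):

    from http.server import HTTPServer, SimpleHTTPRequestHandler
    import functools

    # Serve whatever HTML/CSS/images live in ./public on port 8080.
    handler = functools.partial(SimpleHTTPRequestHandler, directory="./public")
    HTTPServer(("0.0.0.0", 8080), handler).serve_forever()

A cheap router or an old laptop handles a personal readership's worth of that traffic without noticing; the comment system is where the real maintenance cost starts.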
I think self-hosting is a distraction. You can make your own site using Astro and deploy it for free to Netlify and still get 99% of what we're talking about here.
If that was less scary maybe more people would do it!
Welcome to MySpace!
> But given under 0.001% can self-host, I don't see how that can work
The place where the web is still great is where you have to be invested to be a real participant. Everyone can yell about politics in a text box on Twitter/FB/Reddit/HN, or post photos to IG/dating sites, or videos to Twitch/YouTube.
If you can host something, even for a small number of people, you're one of the rare few. If you're "into" something where there is a focused community, then you're back in one of those 1% pools where people vibe and participate.
To make an analogy of it: The web is now a tourist town. Everyone is interested in making money off the visitors with the flashy lights and signs luring them into the over priced tourist traps. The locals, the natives, the REAL .01% know where the cheap places with great food and local flavor are.
I thought social media killed the Web 20 years ago. RIP
It might well be killing the ad-financed web, and that is good. I wouldn't be surprised if Google dies with it.
I do not see traditionally paid and paywalled content suffering. Discoverability in that segment already suffered from how Google treated it, and AI only sped up the inevitable. Good content behind a paywall will be fine.
The small sliver of the web that is popular on HN and that is, let's call it altruistically free, will only benefit. Less competition from ad supported content. As long as you only care about your content being read and not where and under which name, you will be good.
[dead]
AI is killing advertising and social media, not the web. And I'm sure these LLM products are going to start introducing some form of advertising soon anyways.
Sure, hate AI all you want, you are going to be the ones left behind. What you call "the web" has been dead for a long time.
I am having an experience that sort of supports this view. Getting into photography, I am finding it difficult to find good tutorials and guides which are not videos. But everybody wants some of the YouTube money, I guess, so that's where creators focus their efforts.
This might be a sign of things to come: the quality of the virtual world is decreasing at record speed, and that's opening the door to a return to the analog world, aka books in your case.
Yes. But I already have shelves filled with books, wishing half of them were digital... but I agree we can have both.
The internet has been dead for a few years now.
Seems possible that one unintended consequence of AI could be a rebirth of the Web as something closer to what we knew. Because why use search at all for general inquiry when AI can satisfy much of that?
More critically, it’s not hard to imagine that, with AI-boosted coding, a thousand bespoke search engines and other platforms are just around the corner, radically changing the economics of platform lock-in. When you can build your own version of Google Search with the help of AI, and do the same with social media or any other centralizing Internet force, then platforms cease to be platforms at all. With AI, the challenges of self-hosting could become quite manageable as well. And while we’re at it, some version of the same individual-centered computing economics on your own devices seems possible.
In these senses, it’s quite possible that Jobs’s vision of computing as extensions of individuals rather than individuals being extensions of computing is again at hand, with the magic of self-curated order from a chaotic Net not far behind.
AI is one sharp tool cutting slices from the old internet. But perpetrators have used different tools from the start: SEO spam, algorithmic feeds, embrace/extend/extinguish, building moats, the attention economy, and many others. AI is just the next newfangled sharp tool.
In other words, I don't think that AI is killing the web.
It's being profit-oriented and running amok in an unleashed way. It's prisoner's dilemma. You know, if you don't do it then someone else will do it and you lose. Enshittification is one consequence. The internet experienced it from the beginning. But only about fifteen years ago companies learnt how to squeeze the last drop out and, like in the tragedy of the commons, everybody is worse off.
And what's most catastrophic? People are confused. They look at the tools, not at the famous people behind these rampages. Of course, as leaders they just optimize the hell out of the internet with the goal that their companies thrive. But in doing so they cause heavy damage.
I think we need to look at it per type of use. The beauty of the web is its versatility.
- It's an ever evolving information repository - the initial use - from Wikipedia to blogs to newspapers.
- It's a debate space - forums ( used to be newsgroups )
- It's a transaction space - ecommerce, marketplaces
- It's a social space – from keeping in touch to meeting new people – social media, dating websites. Used to be IRC.
- It's an entertainment space - tiktok, youtube, netflix, etc...
AI will have the harshest initial impact on the information repository use. It will cannibalize it but also needs it to feed itself.
The transaction space will be affected. Protocols like MCP, once strengthened, will need to support transactions. Payment infrastructure will need to be built for this.
Then, the social space will be the weirdest. AI Companions will become ubiquitous, naturally filling the void left by the weakening of the social fabric and loneliness epidemic.
For the debate space, 99% of it doesn't play the role of debate, but more of the role of echo chamber and social validation. It's AI Companionship but by community. These spaces will stay. AI is one to one, not one to many. But they will drastically lose appeal. AI will perfectly play this role of validation and echo chamber.
Finally, entertainment is already being disrupted. The question will be how the industry as a whole ( it's more than purely content creation, it's the whole mythos creation around it ) will adapt to the possibility of on the fly content creation.
AI will become the main human-machine interface, and the role of machines will grow exponentially in our daily lives. The capitalistic concentration that will ensue will be unlike anything seen before. The company that wins AI will be the most powerful company in history. It will dominate not only tech, but culture, economics, and world views.
Remember, GPT2 was only released 6 years ago.
Someone should make a new internet modeled off of around 2002.
Bind to port 81 and go for it.
Links to paywall sites should be banned!
I liked SO back in the day. And then I didn’t. Felt that way about the Ars community in the early days. And then didn’t. I loved Slashdot for a while, and then didn’t. Reddit seems to be passing the same zenith. I remember originally loving comp.lang._____.
In each case, some form of Pournelle's Iron Law of Bureaucracy seems to take over. Enshittification just feels like an economic abstraction over Pournelle's law. It’s the way that crap accretes onto the good.
I’ve come to believe it’s inevitable. And just look for where the next cycle is occurring. Ride the wave while it works.
Help us AI! Save us from AI.
I think the even bigger problem is that what AI is doing to the Internet now, it will do to personal knowledge on a generational scale.
It will get good, startlingly good, to the point that going through the heavy effort of really learning things becomes old-fashioned, and positively antiquated.
I am afraid of what happens to the march of progress when that happens.
The web is obviously not dead. If "boring information gathering" takes a hit, it's no biggie. Long live the creative web, or the good quality products and services web.
AI is great, but so is reading a dedicated article written by someone as a published piece of work. Like the "papers please" article about Australia's Orwellian digital ID regime. I liked that piece. AI could write something on the topic, but it wouldn't have the same punch or original expression. AI is not great with subtle nods or cheeky references to other topics. It tries but lands with an awkward thud mostly. So I use AI for "boring information" gathering, which it excels at. The web will be fine.
I literally cannot read this article because of a paywall.
The title is deeply ironic.
Paywall sites are what is killing the web
At the same time, apps are also a bit in decline. People still make them but the whole race for making it to the top 10 in the app stores seems to have faded away. And a lot of them are simple web page wrappers. People still install some apps but more on a need to have basis than that they are constantly adding/removing apps. So, I don't buy this "the web is in decline" framing.
Change is a constant on the web. Things were very different in 1995 (plain html, no good search engines), 2005 (no widespread web capable smart phones usage yet, Google, AJAX), 2015 (peak social media and app hype), and 2025 (social media has shifted to new apps and lots of people are disengaging entirely, AI is starting to threaten Google, content aggregators serve most web content).
For 2035, I would predict that AI will drive a need for authenticity. Existing platforms don't provide this because they lack content signatures. We've had the tools to reliably sign content for decades, but we don't use them much except for DRM content behind paywalls (for commercial reasons). So you can't really tell apart the AI-generated propaganda, marketing, misinformation, etc. from authentic human-created content by individuals you care about. And that might be contributing to people disengaging a bit. But you can see the beginnings of this on platforms like Bluesky and Signal, which push end-to-end encryption and user verification. People might share AI nonsense via these platforms, but they seem to be less about that than, say, X, TikTok or Instagram are. We sometimes watermark our images. We don't digitally sign them. Why is that?
Just speculating here but the web could use a big upgrade here and do more than just certify domain name ownership. Which is fairly meaningless if the domain is some big network with many millions of users. What about certifying content itself? Reliably tie content to their creators in a way that can't be forged. IMHO this is long overdue and the related UX challenges are there but solvable in principle. DRM is a prime example of a fairly usable implementation. Just works if you paid for the content. Signed content would make it very challenging to pass off AI gibberish as authentic if it's not signed by a reputable private key. And if it happened anyway, that would damage the reputation of that key. I don't exclude the possibility of reputable AIs emerging. How would you tell those apart from the disreputable ones?
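To make the primitive concrete, here is a minimal sign-and-verify sketch using Ed25519 via the third-party cryptography package. The key handling is deliberately naive; publishing, distributing, and revoking keys, and the UX around all of that, is the hard part this comment is really about:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Author side: sign the content bytes with a long-lived private key.
    author_key = Ed25519PrivateKey.generate()
    article = "A genuinely hand-written post".encode()
    signature = author_key.sign(article)

    # Reader side: verify against the author's published public key.
    public_key = author_key.public_key()
    try:
        public_key.verify(signature, article)
        print("content matches the author's key")
    except InvalidSignature:
        print("altered content, or not from this author")

Reputation, revocation, and "reputable AIs" signing their own output would all sit on top of this primitive.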
I recognise the need you describe, but there will be very strong incentives to forge anyway (ie spew ai-generated "not made by AI!" slop)
Sure, the more forgeries there are, the stronger the need to properly identify those forgeries. The only technical way we have is through content signatures. You can't forge a digital signature if you don't know the private key. And if you do it's technically identity theft; which would be a crime in most places.
The thing with AI is that it drives down the cost of generating stuff. So the generated stuff starts drowning out the human content by orders of magnitude: 100x, or 1000x, or worse. The worse this gets, the more obvious the need to distinguish authentic content from AI slop will become. This will also become a value-add for social networks, because drip-feeding users garbage content has diminishing returns. Users disengage and move elsewhere. Meta experienced this first hand with Facebook; they ran it into the ground by allowing the clickbait generators to hijack the platform. The first networks that figure out how to guarantee that only authentic, quality content users have opted into is shown to them will gain a lot of eyeballs and users. That's why verified users are such a big feature on different networks now. The next logical step here is verified content by a verified user.
And once we have that, you just filter out all the unverified garbage.
I hope not! The web sucks now.
Talking to LLMs is way, way better.
Sadly HN has never been of much value to me despite my first account being almost of drinking age. Yes, having people like Nagle (animats) and other hardcore experts who know what they’re talking about is great. But they’re an extreme rarity in a sea of absolute shit. You want to talk about enshittification? Just take a look at the nosedive of HN. So yeah, for me it’s usually more insightful to do a deep dive with an LLM. Before that, of course, the Wikipedia rabbit holes were I think a staple of the internet.
LLMs for me to a large degree satisfy the “hacker curiosity” that HN guidelines wank over but betray with every bullshit upvoted and gamed clickbait post. It’s a search engine that flattens rabbit holes for me and makes traversing the corpus of information out there very enjoyable.
People complaining about LLMs being scrapers is, to me, amusing to the point of being nonsensical. The entire point is to use it as a discovery engine that brings the most common and the most obscure to the same level of accessibility.
The web was already dead.
> We care about your privacy. Can we please put a camera in your toilet seat for a personalized experience?
>
> [ ACCEPT ]
Browsing the web is a nightmare these days, I rarely visit "new" websites
> Subscribe to our spam for a 10% off coupon
>
> [ ] [SEND]
It is just a pain to visit any website these days... anyone involved creating these modern monstrosities should just fire themselves and go on a hike or something.
> We rely on invasive, tracking ads! Please disable your ad blocker so we can get 0.00001 USD, please.
>
> [I'VE DISABLED MY FIREWALL AND ANTI-VIRUS] [PAY 999 USD A MONTH FOR AN AD-FREE EXPERIENCE]
I think the economics will save it.
AI isn't cost-effective. The investors are going to want their money back very soon due to outside economic influences... they won't get it back, and many of these AI pop-ups are going to fold. The rest are going to scale back and jack up prices.
> AI isn't cost-effective. The investors are going to want their money back very soon due to outside economic influences... they won't get it back, and many of these AI pop-ups are going to fold. The rest are going to scale back and jack up prices.
Nothing is stopping us from having our cake and eating it too. OpenAI could fall over, and we would still have all the publicly available models kicking around.
It's not the I/O that costs; it's the hardware to push the models, and the power to push the hardware...
Oh, and the companies themselves are pulling in mountains of debt to build themselves out...
Personally this is what I’m hoping for. The stories I read about services sold as AI turning out to be minimum-wage workers tell me that, as much as everyone thinks this is the dawn of a new age of hyperintelligent machines, we haven’t gotten as far as we wanted, as fast as we wanted or hoped.
Given that incredibly capable models can be run on fairly low cost hardware, how will this really change anything?
It's not AI. It's corporations, a much more dangerous form of non-human person. The actual AI bots aren't doing this abusive HTTP request spamming that causes services to be infeasible to run due to resource usage. It's coming from normal HTTP request software with no AI. And it's happening due to the perverse incentives of corporations and their intrinsic lack of liability. They truly are the worst legal persons.
We are many; search engines are the means to discover things, because even with Usenet it's impossible for a human to discover enough information on the web via URLs and links alone. That's the real revolution: links are useful, but not enough. Search engines are the best tool we have had so far to find knowledge around the web. Now LLMs try to surpass traditional search engines by milking knowledge from web content: say we have many articles about wildfires in a region, but not one about wildfire trends in that region; an LLM could try to spot a trend by milking all the articles over a significant timeframe. Conrad Gessner's Bibliotheca Universalis dream.
So, LLMs do not kill the web; they eat it. We are still almost the sole valid source of data for LLMs.
What really killed the web is social networks as proprietary walled gardens, instead of an open Usenet with a web companion for stuff to be preserved for posterity or too long/complex for a mere post. What killed the web is the fact that ISPs offer a closed box called a "router" instead of an open homeserver, even a limited one. With an open version, with IPv6, anyone could buy a domain name and publish a blog from their own iron with ready-to-write software, automatic RSS feeds, newsletters, etc. If we gave such a tool to the masses, the original web would be back, but it would mean free speech, and giants/politicians prefer ways to master public topics through their platforms, hiding most of the stuff they dislike and pushing the ideas they like...
Search engine indexes being turned into copyright enforcement levers also significantly killed the net as it created scarcity in info dissemination for the sake of maintaining info asymmetry.
Go ahead and try to find JLG equipment/service manuals on the open net anymore. I'll wait.
Those rental boom lifts and so on? Are these it?
https://www.google.com/search?q=site%3Acsapps.jlg.com+filety...
https://www.google.com/search?q=site:csapps.jlg.com+filetype...
They are needed anyway, and we also have YaCy as an example, among other distributed search solutions. The point is that most people do not participate, so only the commercial ones get enough resources to be useful.
Problem #1 - to "save it", you first have to define the idealized and/or snapshot-in-time web that you want to save. Don't expect much agreement here, especially on the details.
Problem #2 - if you aren't the Emperor of Earth or some such, how could you make your ideal web stable over time, in today's world?
Think it’s a done deal.
I have a little bit of hope for semi-independent operations though. Things like HN or Lemmy that were never really ad-supported anyway and have some distance from the enshittification trend.
AI inference can save it, because it can be used to make tools that reverse "enshittification". Cloudflare slams the door in the face of this "exit".
To use a physical metaphor. Walled cities. Guarded wagon trains. Keep the barbarians at arm's length.
AI is just one more tool that increasing wealth inequality and over-monetization/unregulated capitalism are exploiting, and it makes everything it touches worse. It's not the first tool, and it won't be the last.
I think the web was killed earlier, by human slop that search engines can't or won't filter. Now we find out that a slightly longer prompt in an AI chat returns better results. So what?
But for the existence of YouTube, I think the ad business killed the internet.
I am still having a hard time seeing how "content" creating companies really have anything legal to stand on, short of the system simply making it up on the fly to keep the whole system from collapsing.
If the arguments of content creators are valid, as I understand them being made, then those content-creating entities should also have been paying the people who created the content some form of "royalties" every time someone sees their content, right?
Further extending that argument, the likes of artists and authors and even anyone who went to a university, especially a private one, should owe those entities “royalties” for the knowledge they keep reusing all their life, right?
Short of people doing already illegal things like hack servers instead of simply paying for a service to gain access to the “content”, I don’t see any way this is a legitimate argument unless we want to upend the whole foundation of the whole system of society, or at least create an unsustainable inconsistency and conflict in the system that will eventually destroy itself.
To preempt a counter: if scraping is illegal and not allowed, what if an AI company simply employs an army of humans to copy-paste the information into new files, you know, like many university students may do for notes?
What am I missing?
Start to regulate the technical protocols to access the AI prompts.
Like regulated noscript/basic (x)html interop. Or 'curl' based simple APIs.
Basically, if the WHATWG-cartel web engines are no longer required to access and use "AIs", things will start to move significantly.
I don't get why the articles behind paywalls are shared here.
They killed the web, not AI.
One of the things I've been wondering about, with the 'digital detox' trend and some of the younger generation getting dumbphones instead of smartphones, is why the papers haven't found some way of turning back the clock to capitalize on that, when it's supposedly so hard to sell news now. 24-hour news is decades old at this point, and the constant firehose of events from every location on the globe is tiring, especially when only a tiny fraction is directly relevant to you. It'd be interesting if they could make a more attractive news/analysis product, like a newspaper or the evening news broadcast, distinct from what is readily available from all the other sources.
Any day now some smartass is going to start selling analogue newspapers like hotcakes. Speaking of, I wonder how much I can get neohipsters to pay for hotcakes on the street. Slapjacks, I'll call them . . .
So a recommendation engine hooked up to news articles. Google News already exists.
It's the other way around. Paywalls are a result of the web and the Ad companies which power it killing the revenue models of publishers. AI, which steals even more and repackages their content, will make it worse.
What I don't understand is why nobody has actually figured this out. I read a lot of different news sources. I'd have no problem paying a reasonable amount of money to read an article I came across in the Podunk Picayune that looked of interest, but no, I have to sign up for a subscription. So they get nothing.
Yeah, just share the archive link directly.
Canonical links are preferred, per guidelines. Paywalled sites that are otherwise on topic with a workaround are usually allowed.
https://news.ycombinator.com/newsguidelines.html
https://news.ycombinator.com/newsfaq.html
Totally agree, here is something I use to read without paywalls https://archive.ph/2021.09.10-164025/https://www.bostonglobe...
Because paywalls are optional, at least for a crowd such as the HN crowd. Information wants to be free.
No. All great things come to an end: artistic movements, cultures, nations, etc. The end of the internet is now.
The internet is cables and other hardware, and protocols, none of which is going anywhere. The Web, an internet application, seems to be dying, and certainly newsgroups and other internet applications have also died, but the internet itself isn't dead or dying. In fact, it's growing as the global rollout of broadband continues and the unconnected get connected.
The protocols are going away if they are not enforced by a Google. Google was the "US" that gave you the post-WW2 "peace" on the web. The big players only follow "your" standards when it suits them.
Consider mobile apps on Android and Apple. They are an example of the internet without a browser. You only get widely used standard browsers if big money sees a moneymaking point in having them.
A paywall can surely save it.
It's still not as annoying as the assorted influencers who repeat The Economist headlines and articles back at me
Anyway this article is about AI replacing web search, not "killing the web" which I would take as it somehow deleting or overwriting content on existing webpages. Or generating so much spam as to make the web unusable for the average person.
Large sites that can't exist without "traffic" already killed the web a long time ago. A paywall is the proper solution, not ads in content and content in ads. That means you will have lower traffic, it doesn't mean you are being killed. It just means you stopped assaulting passersby who are linked to your site.
> Anyway this article is about AI replacing web search, not "killing the web"
Indeed, an exaggerated title. But we are all supposed to get the idea that the web is really dying, so we give up working on it. We are supposed to get that idea because the genie of the web has been out of the bottle for 30+ years; that stuff is going nowhere. The open web is a hindrance for big business. Big business wants to keep internet infrastructure to push apps, AI and what not, but does not want to keep the open web.
The standardized web we have had until now was policed by Google so it could harvest ad revenue from us. With no gorilla to enforce such a standard, the web will balkanize, as it has done before.
[dead]
[dead]
[flagged]
Just ignore the platforms. Use RSS on a body of self-curated websites/bookmarks. Click to read the articles and essays on their own domains (show the creators some love by doing that), and click around over there on that other domain.
I built my own system for that, but I know for sure this is possible with off-the-shelf (open source) software.
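A self-curated reader really is only a few lines with off-the-shelf parts; a sketch using the third-party feedparser library (the feed URLs are placeholders for your own list):

    import feedparser  # pip install feedparser

    # The hand-curated list replaces the platform's algorithmic feed.
    FEEDS = [
        "https://example.com/feed.xml",
        "https://example.org/atom.xml",
    ]

    for url in FEEDS:
        feed = feedparser.parse(url)
        for entry in feed.entries[:5]:
            # Title plus the canonical link, so you click through to the author's own domain.
            print(f"{feed.feed.get('title', url)}: {entry.title}\n  {entry.link}")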
It takes some time to get used to this. No saturated video thumbnails, no infinite scrolling, no notifications. It's slower and feels more boring in the beginning. But it becomes a blessing very soon, when you go back to LinkedIn's feeds or Youtube's algo grid after a month and it feels like a punch in the stomach.
I used to be a heavy user of RSS, back in the Google Reader days. I loved it for following a wide array of different blogs. I'm not really sure why I stopped with rss - I switched to viable alternatives to Google reader when it died.
Recently I've been keen to get back into this way of using the web, because I have evidently been sucked into scrolling on the platforms until the algorithms give me something I want to see.
The other day, one of my favourite web dev blogs (and one of the only blogs I actually seek out) created this fantastic compendium of Web Performance resources and blog links, along with an associated rss opml file. Surely this is the push I needed to get back to the glory of the web.
https://infrequently.org/links/
But I definitely need to put in the effort to discover other eclectic blogs. I really miss reading long, authentic things on diverse topics
I never stopped using RSS. Went Google Toolbar > JetBrains Omea > Google Reader > Feedly. Been on Feedly since 2013. Highly recommend it.
Nice, thanks!
I think OPML is underrated and the combination of RSS (Really Social Sites) and OPML (Other People's Meaningful Links) could give the open web a resurrection as the social media of choice for curious people.
Right now, I'm working on integrating more and more OPML functionality into my RSS software. I envision a quick way of exploring and discovering new links/feeds from sites/feeds that I already follow.
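The discovery part is pleasantly simple, because OPML is just an XML outline whose entries carry xmlUrl attributes. A sketch of pulling the feeds out of a blogroll OPML that someone else publishes (the URL is a placeholder):

    import urllib.request
    import xml.etree.ElementTree as ET

    def feeds_from_opml(url: str):
        """Fetch an OPML file and return the (title, feed URL) pairs it lists."""
        with urllib.request.urlopen(url) as resp:
            tree = ET.parse(resp)
        return [
            (node.get("text") or node.get("title") or "", node.get("xmlUrl"))
            for node in tree.iter("outline")
            if node.get("xmlUrl")
        ]

    for title, feed_url in feeds_from_opml("https://example.com/blogroll.opml"):
        print(title, "->", feed_url)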
> RSS (Really Social Sites)
Rich site syndication.
I have to inform you that it has a new name.
I have to inform you that it doesn't.
RDF Site Summary.
I am an RSS user, but it is pretty frustrating being one these days. All of the, I guess, "first tier" sites you'd really want an RSS feed for don't have one any more, or offer a truncated one that forces you onto the platform (yes, I roll my own morss; it doesn't always get the content). You are left with second-tier news websites that pollute their feeds with reposted AP content you might even see on several of the same feeds you follow.
And the biggest issue is that no one is starting a new site and implementing RSS. Seems like for a lot of RSS feeds I follow, the only reason they still exist is because the webmaster has not yet culled the service for whatever reason; like some of these links are found on vestigial web pages that look like 2007 internet whereas the rest of the site is modern.
And it makes sense why RSS is dying. It is a huge free bone tossed to the community. You don't see free bones tossed out anymore without a string attached to pull you back into some profit making angle. Everyone wants you on their site so they can serve you ads. They don't want you using a feed reader and getting that content without having to see an ad.
I have to agree with you. Completely.
On one hand I think it's a shame and I do miss feeds on certain (big) websites, but on the other it makes me appreciate the small web or indie web or just open web more.
Feels like rehab after two decades of 'social media'. But the open web is the ultimate form of social media itself, if you ask me. I plead for a name change of RSS to Really Social Sites. I already started calling it that in my own software.
A ton of good discussion has left Reddit/Facebook/Twitter for all the obvious reasons and gone to Discord, because its discoverability is low; so is discovery by trolls, AI scraper bots, and plagiarists.
Which is great - if you have the invite and like the Discord UI.
It really sucks if you'd prefer to follow RSS or longer-form in general.
God, having complex conversations on WhatsApp is a pain, doing it on discord or any real time chat …
Ignoring is not how it works. The internet is basically a huge social circle; if not enough people get on board, a site can die out really quickly. I've observed quite a few examples of small communities closing down because no one was there anymore; some websites I loved as a child no longer exist for this reason too, and gone with them is all the content they once hosted.
Here's the problem:
1. Software/Infrastructure have a cost: If you want to self-host, there's a consistent dread of maintaining things. It wears you down, slowly maybe, but eventually.
2. The problem of discovery: In the past, people shared links and resources manually, often on a forum ("forum life", I call it). But nowadays people rely more on platform recommendations (it starts with "Just Google it"). If your content/link is not recommended, then it can't reach far. Also, people nowadays really hate registration (and memorizing/recording accounts/passwords), and they will not even try to use "strange" websites.
3. Government regulation: The government pushes law upon law that could restrict self-hosted content, by either making self-hosting difficult or forcing websites to self-censor (which most personal sites just don't have enough admins to do).
4. Some people who have the capability and know-how to solve the problem are "solving" it the wrong way. Instead of creating systems that modern users would love to use, they try "bringing back the old way", so to speak, without giving any consideration to why people abandoned "the old way" in the first place. The software they create may even be quite hostile to regular non-tech-savvy people, but hey, at least they themselves think it's cool.
A few projects get it right, like Mastodon, and maybe Bluesky, etc. But these projects still don't earn a lot of money or political capital, meaning they still can't escape point 1 above, and maybe point 3 as well.
Overall, I think it's less that the platforms are exploiting the internet; it's mainly that most people just "moved on" to whatever makes their life easier. The internet is a tool, after all.
P.S. If someone wants to solve the social media over-monopolization problem, I'd recommend that you make sure you're "user focused": user, user, user, the regular old man/woman John/Mary Doe user. That's how you create a social circle/network effect, and that's how you grow and sustain.
Freetube is a way to achieve this with YouTube. You just get RSS feeds, you avoid endless scrolling.
https://freetubeapp.io/
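Under the hood this works because YouTube still serves a plain Atom feed per channel, and FreeTube builds on the same idea. A sketch of subscribing to one directly with feedparser (the channel ID is a placeholder; take it from the channel's page URL):

    import feedparser  # pip install feedparser

    CHANNEL_ID = "UCxxxxxxxxxxxxxxxxxxxxxx"  # placeholder channel ID
    feed = feedparser.parse(
        f"https://www.youtube.com/feeds/videos.xml?channel_id={CHANNEL_ID}"
    )
    for entry in feed.entries:
        print(entry.title, entry.link)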
I worry that AI/bots present a disincentive for proper RSS distribution. Authors may not want to provide easy access to their content for bots. Maybe paywalling? Maybe proof of work solves this?
That's something the few can do, but not the many.
As open source improves at user onboarding, and user experience, there might be a chance.
You're using the wrong tools to browse the web if it seems that is the case.
The weird, creative, bordering on unhinged part of the web is still very much around and alive. It's just that you need to depart from the major social media sites and search engines if you want to find it again.
Delete all social media immediately. It’s the equivalent of Neo unplugging himself, taking these tubes out of his throat.
Well said. There's a good search engine for that, maybe you've heard of it?
https://marginalia-search.com/
;)
Well as it happens...
Hahaha
I’ve tried Marginalia about… probably 10 times, at this point? Every time I want niche search results. I haven’t found an interesting site through it, yet.
I love the concept and want it to work! I pay for Kagi; I value search.
The explore mode[1] is probably the tool you're looking for if you're just looking for something interesting / demonstration that the weird web still exists.
[1] https://marginalia-search.com/explore
This is a step in the right direction. Thanks for this.
Bring
Back
Web
Rings
(But seriously, I think I would love to rat-hole down interesting web rings.)
There's a web ring revival happening, ironically by people who are too young to actually remember them.
Other networks are web ring prone, too, like Tor and I2P. Lot of web rings found on either.
The problem of the pre-platform Web is the difficulty of discovery; your interesting content will have but a few readers.
The lure of platforms, like Twitter, or, well, HN, is that your content can potentially be seen by "everyone". Going viral is fun but not that important; being seen by the right people you never knew, or never had a hope of grabbing the attention of, is much more valuable. This leads to much stronger cross-pollination.
(Spam is a problem here, but spam is also a problem in similar biological systems; blooming plants release tons of pollen, and then tons of seeds, most of them fruitless.)
Discoverability wasn't much of a problem for Google's first half decade. We had great content spread all over from millions of sources easily discovered with a simple search engine. Then Google IPO'd and Facebook showed up and everything went to shit. And that ignores other great discovery tools from directories all the way back to web rings. The web from 1995-2005 really was pretty good for content and discovery both. Today it's a few mega corporations with walled gardens who control all of the discovery and are loathe to let users engage with the web outside of their walls.
The thing that stops me pursuing this idea, though, is how do you verify contributors to this new internet aren’t platforms/businesses?
Where do you draw the line?
Who gets to draw the line?
This is an incomplete thought, but a friend of mine has this idea around reputation built through a sort-of key signing. You get a key, your friend gets a key, you sign each other's keys. The key can serve as an indicator of trust, or validity that an individual's contributions are meaningful (or something). And if your friend suddenly turns into a corporate shill, you could revoke that trust. And if the people haven't established their own trust with that person, their trust goes when yours does. Transitive trust.
It obviously has some flaws, and could be gamed in the right circumstances, but I think it's an interesting idea.
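A toy sketch of the transitive part (the names and the signature graph are made up; a real system would hang this off actual key signatures):

    from collections import deque

    # Who has signed whose key.
    SIGNED = {
        "alice": {"bob", "carol"},
        "bob": {"dave"},
        "carol": set(),
        "dave": set(),
    }

    def trusted(me: str, target: str, max_hops: int = 2) -> bool:
        """Is target reachable from me through at most max_hops signatures?"""
        frontier = deque([(me, 0)])
        seen = {me}
        while frontier:
            person, hops = frontier.popleft()
            if person == target:
                return True
            if hops == max_hops:
                continue
            for friend in SIGNED.get(person, set()):
                if friend not in seen:
                    seen.add(friend)
                    frontier.append((friend, hops + 1))
        return False

    print(trusted("alice", "dave"))               # True: alice -> bob -> dave
    print(trusted("alice", "dave", max_hops=1))   # False: too far

Revoking your signature just removes an edge, and anyone reachable only through that edge drops out of your trusted set, which is the "their trust goes when yours does" behaviour.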
Isn't this just a standard pgp web of trust?
https://en.m.wikipedia.org/wiki/Web_of_trust
Sounds like following people on a social media platform and only reading posts from in your network. Which is exactly how most people I know use Bluesky.
It works better than Twitter's algorithmic feed, but it's still not foolproof, because not everyone has the same idea of what sort of content they are willing to trust/track.
Anything that requires the end user to internalize PKI is dead on arrival.
A) The interface won't get intuitive enough.
B) The asshats will still find a way in.
C) Ain't nobody ever met someone in the real world and gone "Yo dawg, what's your public key?"
Encryption is just a machine that turns already hard problems into key management problems.
A plug-in. Trusted users thumbs up/down sites and ratings are recorded in a database. The plug-in visually differentiates shite links (according to database) so others can avoid clicking on them (or they can hide them altogether).
A kind of PiHole for just shitty SEO sites.
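The lookup side of such a plug-in is the easy bit; bootstrapping the shared database and deciding who counts as "trusted" is the hard bit. A toy sketch (domains, scores, and thresholds are all invented):

    from urllib.parse import urlparse

    # A made-up shared ratings table; a real extension would sync this from the community database.
    RATINGS = {
        "example-seo-mill.com": -42,   # heavily downvoted by trusted users
        "example-blog.net": 17,
    }

    def classify(url: str) -> str:
        score = RATINGS.get(urlparse(url).netloc, 0)
        if score < -10:
            return "hide"    # don't render the link at all
        if score < 0:
            return "dim"     # visually de-emphasise it
        return "normal"

    print(classify("https://example-seo-mill.com/10-best-anything"))  # hide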
Why that line in particular? It seems not to be about the quality of the content. Part of the issue is that businesses were advised to produce useful content, but the motivation for doing so is disappearing. A net negative, surely?
Even if you could do it perfectly (distinguish "authentic people" from slop merchants) the same old actors will do the same old things as long as the incentives are there. They will just wear "real people" like skin suits. Almost worse :/
It will never happen as long as Google is able to gatekeep the Internet with its search and browser. Even if you could find enough power users to break out and create something that hits critical mass, user-powered indexes don’t scale. Whoever swoops in to fix the problem immediately becomes the new Google.
You can look for forums on Tor. It's just not very useful unless you are a drug addict or criminal.
LLM reply. At least get rid of the dashes, come on.
I thought he was doing a bit
> one that puts users, privacy, and real expression first
users aint that special.
No one noticed the parent post is LLM slop?
It spams groups of three (open, chaotic, full of real voices; filtered, throttled, and buried; users, privacy, real expression).
The "It's not just X, it's Y" type of sentence structure. Vapid marketing-style writing with no real substance ("Maybe it’s not about saving the old web. Perhaps it’s time to build a new one").
Of course, there are emdashes too, they may not betray LLM alone as they exist in literature and a minority like to use them in internet comments but when they are present along with other signs of slop they are still a strong tell, particularly when they are numerous.
Is this satire? or trolling? it is concerning everyone replies to it as if there had been human thought behind this drivel.
Typically LLMs don't put spaces between em dashes and the words that surround them—which is the correct orthography, I should point out. Humans often put spaces around them when they shouldn't, like in the example you quoted. I don't know if it's AI or not, but if you ask an AI to use a sentence with an em dash in it, it won't include spaces.
How can "correct" have any meaning in style-preference territory? Chicago doesn't put spaces around dashes. AP does. Oxford follows Chicago, and the rest of the UK uses spaced en dashes instead. For typewriting -- and, by extension, typing -- this well-established convention appears (attested in Garner's Usage, if you're wondering). Chicago always spaces ellipses . . . and AP doesn't, no matter how ugly it looks next to a period. ... Who's correct?
I've seen some variation in such formatting/style from LLMs, so that can't be totally reliable. Doesn't need to be, though. LLMs tend to subject dashes to a distinct flavor of abuse:
- In all the places they don't belong; nearly all can be replaced with a comma, a period, or nothing at all, with no loss to style or tone
- In few of the places they might belong, and conspicuously absent whenever there's a parenthetical phrase to offset
- Obnoxiously dramatic, excessive, and pointless
Don't forget semicolon. Normal people don't use that.
I don’t know that I’d call myself normal, but I use semicolons regularly, though infrequently.
If you're on hackernews, you aren't normal.
Semicolons are fine so long as you know how to use them.
[dead]
I didn’t notice because I unconsciously skim over slop-looking comments without evaluating whether it’s human-written or not, and only read the more interesting comments.
It already exists, it's called the Gemini protocol: https://geminiprotocol.net/
Gemini is a first step in the right direction, but it doesn't really have an immune system protecting it from what happened to the web. Meaning, what happens when Gemini gets so successful that idiots and bots start flowing into it?
My guess is Gemini would be a good starting point to experiment with bold ideas:
- We need some form of web-of-trust system, as most of the time we are not interested in being exposed to, or interacting with, agents more than 1 or 2 hops away from what we trust.
- Sorry, but we will need some form of "nano" payment system in place. If a bot wants to consume resources on my server, maybe fine, but it will have to at least pay for it. We already pay today for every website using proof-of-work challenges (see the sketch below), and I would rather give that money to the person hosting.
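For reference, a minimal hashcash-style sketch of that kind of proof-of-work challenge (difficulty and encoding are arbitrary choices here):

    import hashlib
    import os

    def solve(challenge: bytes, difficulty_bits: int = 20) -> bytes:
        """Find a nonce so sha256(challenge + nonce) has difficulty_bits leading zero bits."""
        target = 1 << (256 - difficulty_bits)
        nonce = 0
        while True:
            attempt = challenge + nonce.to_bytes(8, "big")
            if int.from_bytes(hashlib.sha256(attempt).digest(), "big") < target:
                return nonce.to_bytes(8, "big")
            nonce += 1

    def verify(challenge: bytes, nonce: bytes, difficulty_bits: int = 20) -> bool:
        digest = hashlib.sha256(challenge + nonce).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

    challenge = os.urandom(16)       # the server hands this out per request
    nonce = solve(challenge)         # the client burns CPU, i.e. money, to answer
    print(verify(challenge, nonce))  # the server checks it with a single hash

The asymmetry is the point: verification costs one hash while solving costs roughly a million, which prices out bulk scraping without charging human readers actual money.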
I moved my site to Gemini; finished in the middle of last year.
Clients:
https://en.wikipedia.org/wiki/Gemini_(protocol)#Software
Some links to find content:
gemini://sdf.org
gemini://gem.sdf.org
gemini://gemi.dev/xkcd/
gemini://warmedal.se/~antenna/
gemini://gemini.circumlunar.space/capcom/
gemini://skyjake.fi/~Cosmos/view.gmi
gemini://calcuode.com/gmisub-aggregate.gmi
gemini://tinylogs.gmi.bacardi55.io/
gemini://sl1200.dystopic.world/juntaletras.gmi
gemini://tilde.team/~khuxkm/leo/
gemini://raek.se/orbits/space-elevator/
gemini://fediring.net/
Do you have readers, or is it just for you?
I do not understand. The Clients link above has a list of clients for reading Gemini sites.
They are asking if your site has people viewing/reading it after you moved it to Gemini or if it is just you reading it at that point.
>They are asking if your site has people viewing/reading it after you moved it to Gemini
I have no idea. I do not keep track or log any visitors.
> The web as we knew it — open, chaotic, full of real voices already gone. Free
Commented on a site whose top pages are curated manually....
Web is obsolete. Going forward AI is the first and maybe last step to getting information about a topic. No need to sift through ads, forum drama, clickbait blog posts, comments etc… just straight compiled information into your brain as quickly as possible. Yea sometimes it’s wrong, but sometimes things you find on the wild web are wrong anyway, just deal with it.
I find that when people pine for the old web, what they’re really asking for is some way to connect to other people and see things that people have written or made just for fun in a genuine way, without it being performative, derivative or for other motivations.
In theory social media should have been this, but people’s constant need to accumulate validation or tendency to produce meme-like content adversely affects the quality of their output, giving it a machined style feel that rarely feels genuine or true to their human nature. Instead of seeing people’s true personalities, you see their “masks”.
Thus the issue is not rooted in a technical problem but rather a cultural one: people no longer naively share things that don’t fuel their ego in the most perfect way.
Until that same AI starts shilling ads and certain viewpoints peddled by their owners in the output... This will happen 100% (ads, the other bit has already happened). The economics of all of these models doesn't work as is. There will be a major squeeze down the line.
Some of us have dipped our toes in local LLMs. To be sure, the ones I can run on my hardware always pale when compared to the online ones. But perhaps in time the ones you can run locally will be good enough.
Or perhaps an Apple or Kagi will host an LLM with no built-in monetization skewing its answers.
You can run the model, but someone with vastly bigger resources needs to train it.
Sure. Hopefully decent pre-ad-injected models will still be around.
Most AI tools run on the web, so the question is ill-posed. Not worth opening the article.
I'd say nothing kills the web more than hiding the “reject all cookies” button and covering the whole page with a popup until you accept. So I think we’re safe for now.