mythz 4 hours ago

AI moves so fast that Vibe Coding still has a negative stigma attached to it, but even after 25 years of development, I'm not able to match the productivity of getting AI to implement the features I want. It's basically like sending multiple devs off to do work for you: you just tell them what you want and provide iterative feedback until they implement all the features you want, the way you want, and fix all the issues you find along the way - and they can write the tests and all the automation and deployment scripts too.

This is clearly the future of Software Development, but the models are so good at the moment that the future is already possible now. I'm still getting used to it and having to rethink my entire dev workflow for maximum productivity, and whilst I wouldn't unleash AI agents on a decade-old code base, all my new Web Apps will likely end up being AI-first unless there's a very good reason it wouldn't provide a net benefit.

  • kaspermarstal 6 minutes ago

    We need a new term for LLMs actually solving hard problems. When I help Claude Code solve a nasty bug it doesn’t feel like “vibing”, as in “I tell the model what I want the website to look like”. It feels like sniping, as in “I spot for Claude Code, telling it how to adjust for wind, range, and elevation so it can hit my far-away target”.

  • dust42 3 hours ago

    It just depends on what you are doing. A greenfield React app in TypeScript with a CRUD API behind it? The LLMs are a mind blowing assistant, and 1000t/s is crazy.

    You are doing embedded development or anything else not as mainstream as web dev? LLMs are still useful but no longer mind blowing and often produce hallucinations. You need to read every line of their output. 1000t/s is crazy but no longer always in a good way.

    You are doing stuff which the LLMs haven't seen yet? You are on your own. There is quite a bit of irony in the fact that the devs of llama.cpp barely use AI - just have a look at the development of support for Qwen3-Next-80B [1].

    [1] https://github.com/ggml-org/llama.cpp/pull/16095

    • koito17 an hour ago

      > You are doing embedded development or anything else not as mainstream as web dev? LLMs are still useful but no longer mind blowing and often produce hallucinations.

      I experienced this with Claude 4 Sonnet and, to some extent, gpt-5-mini-high.

      When able to run tests against its output, Claude produces pretty good Rust backend and TypeScript frontend code. However, Claude became borderline unproductive once I started experimenting with uefi-rs. Other LLMs, like gpt-5-mini-high, did not fare much better, but they were at least capable of admitting lack of knowledge. In particular, GPT-5 would provide output akin to "here is some pseudocode that you may be able to adapt to your choice of UEFI bindings".

      Testing in a UEFI environment is quite difficult; the LLM can't just run `cargo test` and verify its output. Things get worse in embedded, because crates like embedded_hal made massive API changes between 0.2 and 1.0 (the latest version), and every LLM I've tried seems to know only the 0.2 releases. Also, for embedded, forget even thinking about testing harnesses (which at least exist in some form with UEFI, it's just difficult to automate the execution and output for an LLM). In this case, you cannot really trust the output of the LLM. To minimize the risk of hallucination, I tried keeping data sheets and library code in context, but at that point it took more time to prompt the LLM than to write the code by hand.
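
      Automating that execution is possible in principle, e.g. by booting the binary under QEMU with OVMF firmware and scraping the serial console for a pass/fail marker that a harness (or an agent) can act on. A rough sketch - the paths, the marker string, and the harness shape are illustrative, not from any real project:

```python
# Hypothetical harness: boot a UEFI test binary under QEMU + OVMF and
# grade the captured serial console, so an agent gets a plain pass/fail
# signal. The marker string and paths are illustrative, not from any
# real project.
import subprocess

PASS_MARKER = "ALL TESTS PASSED"  # the UEFI app itself would print this

def grade_serial_output(serial_log: str) -> bool:
    """Return True iff the pass marker appears in the captured console."""
    return PASS_MARKER in serial_log

def run_uefi_tests(esp_dir: str, ovmf_path: str, timeout_s: int = 60) -> bool:
    # -serial stdio routes the app's console to stdout; the ESP directory
    # is exposed to the guest as a virtual FAT drive.
    cmd = [
        "qemu-system-x86_64",
        "-drive", f"if=pflash,format=raw,readonly=on,file={ovmf_path}",
        "-drive", f"format=raw,file=fat:rw:{esp_dir}",
        "-serial", "stdio",
        "-display", "none",
        "-no-reboot",
    ]
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True,
                              timeout=timeout_s)
    except subprocess.TimeoutExpired as e:
        out = e.stdout
        if isinstance(out, bytes):
            out = out.decode(errors="replace")
        return grade_serial_output(out or "")
    return grade_serial_output(proc.stdout)
```

      Whether that setup pays off versus just handwriting the code is exactly the trade-off described above.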

      I've been writing a lot of embedded Rust over the past two weeks, and my usage of LLMs in general decreased because of that. Currently planning to resume development on some of my "easier" projects, since I have about 300 Claude prompts remaining in my Zed subscription, and I don't want them to go to waste.

      • RealityVoid an hour ago

        > Also, for embedded, forget even thinking about testing harnesses (which at least exist in some form with UEFI, it's just difficult to automate the execution and output for an LLM).

        I don't think it has to be this way; we can do better here. If LLMs keep this up, good testing infrastructure might become more important.

        • koito17 an hour ago

          One of my expectations for the future is the development of testing tools whose output is "optimized" in some way for LLM consumption. This is already occurring with Bun's test runner, for instance.[0] They are implementing a flag in the test runner so that the output is structured and optimized for token count.

          Overall, I agree with your point. LLMs feel a lot more reliable when a codebase has thorough, easy-to-run tests. For a similar reason, I have been drifting towards strong, statically-typed languages. Both Rust and TypeScript have rich type systems that can express many kinds of runtime behavior with just types. When a compiler can make strong guarantees about a program's behavior, I assume that helps nudge the quality of LLM output a bit higher. Tests then help prevent silly regressions from occurring. I have no evidence for this besides my anecdotal experience using LLMs across several programming languages.

          In general, I've had the best experience with LLMs when there's plenty of static analysis (and tests) on the codebase. When a codebase can't be easily tested, then I get much less productivity gains from LLMs. So yeah, I'm all for improving testing infrastructure.

          [0] https://x.com/jarredsumner/status/1944948478184186366
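
          As a toy sketch of what "structured and optimized for token count" could mean: collapse passing tests into a single count and keep detail only for failures. The format below is invented for illustration; it is not Bun's actual output:

```python
# Toy "token-lean" test reporter: passing tests collapse into a single
# count and only failures keep their name and message. The format is
# invented for illustration; it is not Bun's actual output.
def compact_report(results: list[tuple[str, bool, str]]) -> str:
    """results: (test_name, passed, failure_message) triples."""
    passed = sum(1 for _, ok, _ in results if ok)
    failures = [(name, msg) for name, ok, msg in results if not ok]
    lines = [f"{passed}/{len(results)} passed"]
    for name, msg in failures:
        lines.append(f"FAIL {name}: {msg}")
    return "\n".join(lines)
```

          A run of hundreds of tests with two failures then costs a handful of lines instead of a verbose per-test log.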

    • lifthrasiir 2 hours ago

      There aren't many things that LLMs haven't really seen yet, however. I have successfully used LLMs to develop a large portion of a WebAssembly 3.0 interpreter [1], which surely isn't in their training set because WebAssembly 3.0 was only released months ago. Sure, it took tons of guidance, but it was useful enough for me.

      Even llama.cpp is not a truly novel thing to LLMs; there are several performant machine-learning model executors in their training sets anyway, and I'm sure llama.cpp could benefit from LLMs if the devs wanted - they just chose not to.

      [1] https://github.com/lifthrasiir/wah/

    • RealityVoid an hour ago

      > You are doing embedded development or anything else not as mainstream as web dev?

      Counterpoint, but also kind of reinforcing your point: it depends on the kind of embedded development. I did a small utility PCB with an ESP32, and their libs are good, there is an active community, and they have test frameworks. LLMs did a great job there.

      On the other hand, I wanted to chain a timer, a PWM module, and a DMA engine to generate some precise pulses. The way I chained the hardware was... not typical, but it was what I needed and the hardware could do it. At that, Claude failed miserably and only wasted my time, so I had to spend the time to do it manually.

    • almostgotcaught 3 hours ago

      I've said it before but no one takes it seriously: LLMs are only useful if you're building something that's already in the training set, i.e. already a commodity. In which case, why are you building it???

      • whiterook6 3 hours ago

        It's not that the product you're building is a commodity; it's that the tools you're using to build it are. Why not build a landing page using HTML, CSS, and Tailwind? Why not use Swift to make an app? Why not write an AWS Lambda in JavaScript?

      • philipp-gayret 33 minutes ago

        It's true - when I was working with LLMs on a novel idea, they said "sorry, I can't help you with that"!

      • mythz 3 hours ago

        "LLMs are only useful..."

        Is likely why no one takes you seriously, as it's a good indication you don't have much experience with them.

      • antonvs 2 hours ago

        The obvious point that you're missing is that there are literally infinite ways to assemble software systems from the pieces that an LLM is able to manipulate due to its training. With minor guidance, LLMs can put together an unlimited number of novel combinations. The idea that the entire end product has to be in the training set is trivially false.

      • becquerel an hour ago

        Because I'm getting paid to.

      • nurettin 2 hours ago

        Do you avoid writing anything that the programming community has ever built? How are you alive???

  • ffsm8 35 minutes ago

    The industry of "software" is so large... While I agree with web development going this route, I'm not sure about "everything else".

    You could argue that that's the bulk of all software jobs in tech, and you'd likely be correct... But depending on what your actual challenge is, LLM assistance is more of a hindrance than a help. Creating a web platform without external constraints, however, is where LLM assistance shines, that's true.

    • killerstorm 4 minutes ago

      Well, there are certainly kinds of code LLMs would struggle with, but people generally underestimate what LLMs are capable of.

      E.g. Victor Taelin is implementing an ultra-advanced programming language/runtime, now writing almost all of the code with an LLM. The runtime (HVM) is based on the Interaction Calculus, a model which was only an obscure academic curiosity until Taelin started working on it. So the hypothesis that LLMs are only capable of copying bits of code from Stack Overflow can be dismissed.

  • namanyayg 3 hours ago

    Exactly - Codex gpt-5-high is quite like sending out smart devs. It still makes mistakes, and when it does they're extremely stupid ones, but I now treat the code it generates as throwaway and just reroll when it does something dumb.

divmain 5 hours ago

I have been an AI-coding skeptic for some time. I always acknowledged LLMs as useful for solving specific problems and making certain things possible that weren't possible before. But I've not been surprised to see AI fail to live up to the hype. And I never had a personally magical moment - an experience that shifted my perspective à la the peak-end rule.

I've been using GLM 4.6 on Cerebras for the last week or so, since they began the transition, and I've been blown away.

I'm not a vibe coder; when I use AI coding tools, they're in the hot path. They save me time when whipping up a bash script and I can't remember the exact syntax, or for finding easily falsifiable answers that would otherwise take me a few minutes of reading. But, even though GLM 4.6 is not as smart as Sonnet 4.5, it is smart enough. And because it is so fast on Cerebras, I genuinely feel that it augments my own ability and productivity; the raw speed has considerably shifted the tipping point of time-savings for me.

YMMV, of course. I'm very precise with the instructions I provide. And I'm constantly interleaving my own design choices into the process - I usually have a very clear idea in my mind of what the end result should look like - so, in the end, the code ends up how I would have written it without AI. But building happens much faster.

No affiliation with Cerebras, just a happy customer. Just upgraded to the $200/mo plan - and I'll admit that I was one that scoffed when folks jumped on the original $200/mo Claude plan. I think this particular way of working with LLMs just fits well with how I think and work.

  • ramraj07 an hour ago

    Your post has inspired me to check them out. How do you use it - with their UI, or to power some other open-source tool?

    Are you suggesting that this thing is so fast that it's now simpler to quickly work on one thing at a time, instead of running five background tools in parallel - a pattern we may only have invented because these things are so slow?

Flux159 5 hours ago

Was able to sign up for the Max plan & start using it via opencode. It does a way better job than Qwen3 Coder, in my opinion. Still extremely fast, but in less than an hour I used 7M input tokens, so with a single agent running I could easily pass that 120M daily token limit. The speed difference from Claude Code is significant though - to the point where I'm not waiting for generation most of the time, I'm waiting for my tests to run.

For reference, each new request needs to send all previous messages - tool calls force new requests too. So it's essentially cumulative when you're chatting with an agent: my opencode agent's context window is only 50% used at 72k tokens, but Cerebras' tracking online shows that I've already used 1M input tokens and 10k output tokens.

  • zaptrem 12 minutes ago

    They don't have prefix caching? Claude and Codex have this.

  • NitpickLawyer 3 hours ago

    > For reference, each new request needs to send all previous messages - tool calls force new requests too. So it's essentially cumulative when you're chatting with an agent: my opencode agent's context window is only 50% used at 72k tokens, but Cerebras' tracking online shows that I've already used 1M input tokens and 10k output tokens.

    This is how every "chatbot" / "agentic flow" / etc works behind the scenes. That's why I liked that "you should build an agent" post a few days ago. It gets people to really understand what's behind the curtain. It's requests all the way down, sometimes with more context added, sometimes with less (subagents & co).
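
    The pattern is easy to see in a stripped-down agent loop: every request resends the whole transcript, so billed input tokens grow roughly quadratically with the number of turns even though the visible context grows linearly. (The one-token-per-word counter and canned reply below are crude stand-ins for a real tokenizer and model.)

```python
# Minimal agent loop showing why billed input tokens balloon: every
# request resends the full message history. The one-token-per-word
# counter and canned reply are stand-ins for a real tokenizer and model.
def count_tokens(messages: list[str]) -> int:
    return sum(len(m.split()) for m in messages)

def run_agent(user_turns: list[str]) -> int:
    history: list[str] = []
    billed_input = 0
    for turn in user_turns:
        history.append(turn)
        billed_input += count_tokens(history)  # whole transcript resent
        history.append("assistant reply to: " + turn)  # fake completion
    return billed_input
```

    With three 3-word turns, the final request sends only 21 words of context, yet 36 input tokens are billed in total - and the gap widens every turn.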

dust42 2 hours ago

1000 tokens/s is pretty fancy. I just wonder how sustainable the pricing is or if they are VC-fueled drug dealers trying to convert us into AI-coholics...

It is definitely fun playing with these models at these speeds. The question is just how far from real pricing is 500M tokens for $50?

Either way, LLM usage will grow for some time to come, and so will energy usage. Good times for renewables, and probably for fusion and fission.

Selling shovels in a gold rush has always been a reliable business. Cerebras was valued at only $8.1B as of a month ago; compared to Nvidia, that seems like pocket change.

behnamoh 6 hours ago

If they don't quantize the model, how do they achieve these speeds? Groq also says they don't quantize models (and I want to believe them), but we literally have no way to verify that.

This is important because their premium $50 (as opposed to $20 for Claude Pro or ChatGPT Plus) should be justified by the speed. GLM 4.6 is fine, but I still don't think it's at the GPT-5/Claude Sonnet 4.5 level, so if I'm paying $50 for it on Cerebras, it should be mainly for the speed.

What kind of workflow justifies this? I'm genuinely curious.

  • nine_k 5 hours ago

    > What kind of workflow justifies this?

    Think about waiting for compilation to complete: the difference between 5 minutes and 15 seconds is dramatic.

    Same applies to AI-based code-wrangling tasks. The preserved concentration may be well worth the $50, especially when paid by your employer.

    • behnamoh 5 hours ago

      they should offer a free trial so we can build confidence in the model quality (e.g., to make sure it's not nerfed/quantized/limited-context/etc.).

      • NitpickLawyer 3 hours ago

        You can usually try them via things like OpenRouter. Load some credits there and use the API in your preferred IDE like you'd use any provider. A few quick coding test sessions would probably cost <$5, so you can check out the capabilities and see if it's worth it for you.

        • behnamoh 3 hours ago

          openrouter charges me $12 in fees on a $100 credit...

      • conception 4 hours ago

        A trial is literally front and center on their website.

  • cschneid 6 hours ago

    So apparently they have custom hardware built around absolutely gigantic chips - on the scale of a whole wafer each. Presumably they keep the entire model right on chip, in what is effectively L3 cache, so the memory bandwidth is absurdly fast, allowing very fast inference.

    It's more expensive than getting the same raw compute from a cluster of Nvidia chips, but those don't have the same peak throughput.

    As far as price goes, as a coder I'm giving the $50 plan a shot for a month. I haven't figured out how to adapt my workflow to the faster speeds yet (I'm also still learning and setting up opencode).

    • bigyabai 6 hours ago

      For $50/month, it's a non-starter. I hope they can find a way to use all this excess bandwidth to put out a $10 equivalent to Claude Code instead of a 1000 tok/s party trick I can't use properly.

      • typpilol 4 hours ago

        I feel the same, and it's also why I can't understand all these people using small local models.

        Every local model I've used, and even most open-source ones, is just not good.

        • behnamoh 3 hours ago

          the only good-enough models I still use are gpt-oss-120b-mxfp4 (not 20b) and glm-4.6 at q8 (not q4).

          quantization ruins models, and some models aren't that smart to begin with.

        • csomar 2 hours ago

          GLM-4.6 is on par with Sonnet 4.5. Sometimes it is better, sometimes it is worse. Give it a shot. It's the only model that made me (almost) ditch Claude. The only problem is, Claude Code is still the best agentic program in town and search doesn't function without a proper subscription.

  • NitpickLawyer 3 hours ago

    > What kind of workflow justifies this? I'm genuinely curious.

    Any workflow where verification is faster/cheaper than generation. If you have a well-tested piece of code and want to "refactor it to use such-and-such paradigm", you can run n queries against the faster model and pick the best result.

    My colleagues who do frontend use faster models (not this one specifically, but they did try fast-code-1) to build components. Someone worked out a workflow with worktrees where the model generates n variants of a component and displays them next to each other. A human can choose at a glance which one they like, and sometimes pick and choose from multiple variants (something like passing it to Claude and saying "keep the styling of component A but the data management of component B"). At the end of the day it's faster/cheaper than having cc do all that work.
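
    That fan-out is easy to script once generation is cheap: fire the same prompt n times in parallel and collect the variants for side-by-side review. In this sketch, generate_component is a stub standing in for a real call to the fast model (a real version would vary seed or temperature per variant):

```python
# Sketch of the n-variant fan-out: fire the same component prompt n
# times in parallel and collect the results for side-by-side review.
# generate_component is a stub standing in for a real model/API call.
from concurrent.futures import ThreadPoolExecutor

def generate_component(prompt: str, seed: int) -> str:
    # A real implementation would call the fast model here; varying the
    # seed (or temperature) is what makes each variant differ.
    return f"<Component variant={seed} spec={prompt!r} />"

def fan_out(prompt: str, n: int) -> list[str]:
    with ThreadPoolExecutor(max_workers=n) as pool:
        futures = [pool.submit(generate_component, prompt, i) for i in range(n)]
        return [f.result() for f in futures]
```

    A reviewer then eyeballs the n results and keeps, or merges, the winner.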

odie5533 3 hours ago

I find the fast models good for rapidly iterating UI changes with voice chat, like "add some padding above the text box" or "right-align the button". But I find the fast models useless for deep coding work. Still, a fast model has its place - just not at $50/month. Cursor has Composer 1 and Grok Code Fast for free; I'm not sure what $50/month gets me that those don't. I liked the stealth Supernova model a lot too.

  • gardnr 3 hours ago

    GLM 4.6 isn't a "fast" model. It does well in benchmarks vs Sonnet 4.5.

    Cerebras makes a giant chip that runs inference at unreal speeds. I suspect they run their cloud service more as an advertising mechanism for their core business: hardware. You can hear the founder describing their journey:

    https://podcasts.apple.com/us/podcast/launching-the-fastest-...

  • bn-l 3 hours ago

    Composer and grok fast are not free.

niklassheth 6 hours ago

This is more evidence that Cognition's SWE-1.5 is a GLM-4.6 finetune

  • prodigycorp 6 hours ago

    Can you provide more context for this? (eg Was SWE-1.5 released recently? Is it considered good? Is it considered fast? Was there speculation about what the underlying model was? How does this prove that it's a GLM finetune?)

    • NitpickLawyer 3 hours ago

      People saw Chinese characters in generations made by SWE-1.5 (Windsurf's model) and also in those made by Cursor's model. This led to suspicions that the models are finetunes of Chinese models (which makes sense, as there aren't many strong US/EU coding models out there). GLM 4.5/4.6 are the "strongest" coding models atm (with DSv3.2 and Qwen somewhat behind), so that's where the speculation came from. Cerebras serving them at roughly the same speeds kinda adds to that story (e.g. if it were something heavier like DSv3 or Kimi K2, it would be slower).

      • prodigycorp 2 hours ago

        Really appreciate this context. Thank you!

  • nl 5 hours ago

    Not at all. Any model with a somewhat similar architecture and roughly similar size should run at the same speed on Cerebras.

    It's like saying Llama 3.2 3B and Gemma 4B are fine-tunes of each other because they run at similar speeds on Nvidia hardware.

alyxya 6 hours ago

It would be nice if there was more information provided on that page. I assume this is just the output token generation speed. Is it using speculative decoding to get to 1000 tokens/sec? Is there lossy quantization being used to speed things up? I tend to think the number of tokens per second a model can generate to be relatively low on the list of things I care about, when things like model/inference quality and harness play a much bigger role in how I feel about using a coding agent.

  • cschneid 6 hours ago

    Yes this is the output speed. Code just flashes onto the page, it's pretty impressive.

    They've claimed repeatedly in their discord that they don't quantize models.

    The speed does change how you interact with it, I think. I had this new GLM model hooked up to opencode as the harness, on their $50/mo subscription plan. It was seriously fast at answering questions, although there are still big pauses in the workflow when the per-minute request cap is hit.

    I got a meaningful refactor done, maybe a touch faster than I would have in Claude Code + Sonnet? But my human interaction with it felt like the slow part.

    • alyxya 6 hours ago

      The human interaction part is one of the main limitations to speed, where the more autonomous a model can be, the faster it is for me.

seduerr 3 hours ago

It’s just amazing to have a reliable model at the speed of light. Was waiting for such a great model for a long time!

lordofgibbons 5 hours ago

At what quantization? And if it is in fact quantized below fp8, how is the performance impacted on all the various benchmarks?

elzbardico 5 hours ago

The $50/month Cerebras Code plan - first with Qwen3 Coder, now with GLM - is my secret weapon.

Stalin used to say that in war "quantity has a quality all its own". And I think that in terms of coding agents, speed is quality all its own too.

Maybe not for blind vibe coding, but if you are a developer and are able to understand the code the agent generates and change it, the fast feedback of fast inference is a game changer. I don't care if Claude is better than GLM 4.6; fast iterations are king for me now.

It is like moving from DSL to gigabit fiber FTTH.

gatienboquet 6 hours ago

Vibe Slopping at 1000 tokens per second

  • mmaunder 5 hours ago

    Yeah honestly having max cognitive capability is #1 for me. Faster tokens is a distant second. I think anyone working on creating valuable unique IP feels this way.

    • conception 4 hours ago

      This is where agents actually shine. Having a smart model write code and plan is great, and then having Cerebras do the command-line work, write documents effectively instantly, and handle other simple tasks speeds things up quite a bit.

renewiltord 5 hours ago

Unfortunately for me, the models on Cerebras weren’t as good as Claude Code. Speedy but I needed to iterate more. Codex is trustworthy and slow. Claude is better at iterating. But none of the Cerebras models at the $50 tier were worth anything for me. They would have been something if they’d just come out but we have these alternatives now.

  • elzbardico 5 hours ago

    I don't care. I want LLMs to help with the boring stuff, the toil. It may not be as intelligent as Claude, but if it takes care of the boring stuff, and is fast while doing it, I am happy. Use it surgically: do the top-down design yourself, and just let it fill in the blanks.

    • renewiltord 4 hours ago

      Give it a crack. It took a lot of iteration to get it to write decent code. If you figure out differences in prompting technique, do share. I was really hoping for the speed to improve a lot of my execution, because that's genuinely the primary problem for me. Unfortunately, the speed is great but the quality wasn't, for me.

      Good luck. Maybe it’ll do well in some self-directed agent loop.