With recent advancements in AI technologies such as Code Interpreter and LongNet, it seems we are getting closer to a point where AI can write any software, including an operating system. Surprisingly, predictions from platforms like Metaculus suggest that this could become a reality as early as 2029 [1]. The combination of GPT-5, access to factual information through search, the ability to understand and keep track of vast amounts of code with LongNet, and the ability to debug and refine code until it achieves the desired outcome with Code Interpreter, could enable AI to accomplish tasks that were previously unimaginable for a single entity.

It would be interesting to explore alternative methods of assessing this progress, such as analyzing the exponential growth of GitHub issues opened versus solved over time, to determine if there is an expected convergence in the near future.
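
One rough way to eyeball that curve for a single project is to count issues opened versus issues closed per year with the GitHub search API. A minimal sketch is below; the repository is just an example, unauthenticated requests are heavily rate-limited, and this is an illustration rather than a rigorous analysis.

```python
import time
import requests

def search_count(query: str) -> int:
    """Return the total_count for a GitHub issue search query."""
    resp = requests.get(
        "https://api.github.com/search/issues",
        params={"q": query, "per_page": 1},
        headers={"Accept": "application/vnd.github+json"},
    )
    resp.raise_for_status()
    return resp.json()["total_count"]

repo = "microsoft/vscode"  # example repository, not one named in the post
for year in range(2016, 2024):
    opened = search_count(f"repo:{repo} type:issue created:{year}-01-01..{year}-12-31")
    closed = search_count(f"repo:{repo} type:issue is:closed closed:{year}-01-01..{year}-12-31")
    print(f"{year}: opened={opened:6d}  closed={closed:6d}  ratio={opened / max(closed, 1):.2f}")
    time.sleep(15)  # unauthenticated search allows roughly 10 requests per minute
```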


  1. Metaculus: AI Programming - 50k Lines of Code

  • AFK BRB Chocolate@lemmy.world

    Wouldn’t that depend on an LLM having several OSs’ source code to train on? It’s hard to believe one could do it based on the open-source applications they’re trained on now.

    • philm@programming.dev

      Well, not long ago it was hard to believe it could do “abstract thinking” (nothing complex yet, but it’s certainly able to do it). Since there’s a lot of literature on this topic out there, I don’t doubt it’ll be able to do it someday.

      • AFK BRB Chocolate@lemmy.world

        “Abstract thinking” might be a little charitable. Definitions of that phrase are generally like this one:

        Abstract thinking, also known as abstract reasoning, involves the ability to understand and think about complex concepts that, while real, are not tied to concrete experiences, objects, people, or situations.

        LLMs don’t really “understand” anything, and they don’t “reason.” They’re more like pattern recognition engines. They sift through huge amounts of text to build a model of what a conversation looks like, and to match up what you ask with the kind of thing a “proper” answer looks like. So if you ask it to make a recipe for something specific, it’s going to give you one, and it will look reasonable because it’s based on all the recipes it’s seen before (the web is chock full of them), but it very well may not work at all because it doesn’t actually know anything about cooking or how the ingredients work or taste. It’s just able to distill that most recipes for that kind of thing usually have some number of eggs, some amount of flour, etc.

        That’s the way they are with everything. They are incapable of original thought because there’s no thinking happening. So tying this back to the topic, if you ask one to write an OS, it’s going to give you code, but if there are no OSs that it’s been trained on, the chances of that code being close to functional are close to zero because operating systems do a lot of things that other applications don’t.

        And most OSs aren’t open source; they’re proprietary, so the chances of LLMs being able to train on a number of them anytime soon are pretty small.

        • philm@programming.dev

          Yeah, I know how Transformers (the basis of modern LLMs) work. Basically, they’re just predicting the next word based on a sequence of previous words (but the how is really interesting).
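
          For illustration, here’s a minimal sketch of that next-word prediction, assuming the Hugging Face `transformers` and `torch` packages are installed and with GPT-2 standing in for any autoregressive LLM:

          ```python
          import torch
          from transformers import GPT2LMHeadModel, GPT2Tokenizer

          tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
          model = GPT2LMHeadModel.from_pretrained("gpt2")
          model.eval()

          context = "An operating system kernel is responsible for"
          input_ids = tokenizer(context, return_tensors="pt").input_ids

          with torch.no_grad():
              logits = model(input_ids).logits  # shape: (1, seq_len, vocab_size)

          # probability distribution over the *next* token, given all previous tokens
          probs = torch.softmax(logits[0, -1], dim=-1)
          top = torch.topk(probs, 5)
          for p, idx in zip(top.values, top.indices):
              print(f"{tokenizer.decode(idx.item())!r}: {p.item():.3f}")
          ```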

          They are incapable of original thought because there’s no thinking happening

          I would be careful though with this statement, as this is getting slightly philosophical: are we having original thoughts, or are we also just “predicting the next word”?

          the chances of that code being close to functional are close to zero because operating systems do a lot of things that other applications don’t.

          Right now definitely.

          But just iterating on the same code a few times (often just once) frequently spits out high-quality code that doesn’t just run, but is often the optimal solution for the given problem (which can be from an entirely new domain).
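
          What I mean by iterating, as a purely hypothetical sketch (`ask_model` is a stand-in for whatever LLM API you use, and the file and test names are made up):

          ```python
          import subprocess

          def ask_model(prompt: str) -> str:
              """Placeholder for an LLM call that returns Python source code."""
              raise NotImplementedError  # plug in your model/API of choice here

          prompt = "Write solution.py so that the tests in test_solution.py pass."
          for attempt in range(5):
              with open("solution.py", "w") as f:
                  f.write(ask_model(prompt))
              result = subprocess.run(["pytest", "test_solution.py"],
                                      capture_output=True, text=True)
              if result.returncode == 0:
                  break  # tests pass, often after the first or second iteration
              # feed the failure output back so the next attempt can fix it
              prompt += "\n\nThe previous attempt failed with:\n" + result.stdout[-2000:]
          ```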

          I’m not saying that it’s really able to do complex thinking; I barely use it for my programming. But it’s certainly doing basic abstraction (not just regurgitating code it has seen at some point), which is fascinating. Real original thought (research) may be a few years away, but I wouldn’t rule that out either: research is often also “just” incremental, building on and combining old research, and with a little bit of stochastic guessing in the right direction it may be able to find new, innovative solutions, i.e. have original thought.

          • AFK BRB Chocolate@lemmy.world

            I would be careful though with this statement, as this is getting slightly philosophical: are we having original thoughts, or are we also just “predicting the next word”?

            I mean, if humans aren’t having original thoughts, how do you explain any advancement? The first use of tools? The development of language and writing? Agriculture? Architecture? The printing press? Lemmy? That seems like such a strange argument.

            Real original thought (research) may be a few years away, but I wouldn’t rule that out either: research is often also “just” incremental, building on and combining old research, and with a little bit of stochastic guessing in the right direction it may be able to find new, innovative solutions, i.e. have original thought.

            Well, maybe, but I think that would be a fundamentally different approach/algorithm than anything we have today.

            I never said that AI will never be able to write an OS, just that it’s nowhere near as close as it might seem based on an LLM’s current ability to create code.

            • philm@programming.dev

              That seems like such a strange argument.

              What I’m trying to say is that “predicting the next word” could also include having original thoughts, based on what happened previously. Something like: we need food, plants grow, they produce seeds, maybe the seeds can be used to produce new plants; next predicted “word”: use the seeds and grow plants.

              Well, maybe, but I think that would be a fundamentally different approach/algorithm than anything we have today.

              Yeah, it could be and likely will be, but the current unsupervised approach is quite effective and has not yet reached its limit. AI (LLM) research is much more incremental than you think; the last “ground-breaking” paper was “Attention Is All You Need” (the Transformer paper), and even that combines a lot of techniques from previous algorithms (in a “slightly new” configuration).
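
              The core of that paper is small enough to sketch in a few lines; here is a minimal scaled dot-product attention, with made-up shapes and random inputs purely for illustration:

              ```python
              import numpy as np

              def scaled_dot_product_attention(Q, K, V):
                  """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V"""
                  d_k = Q.shape[-1]
                  scores = Q @ K.T / np.sqrt(d_k)                 # query/key similarities
                  weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
                  weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
                  return weights @ V                              # weighted sum of the values

              rng = np.random.default_rng(0)
              Q = rng.normal(size=(4, 8))  # 4 query positions, d_k = 8
              K = rng.normal(size=(6, 8))  # 6 key/value positions
              V = rng.normal(size=(6, 8))
              print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
              ```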

  • ArkyonVeil@lemmy.dbzer0.com

    Difficult to say right now. While it’s bragged that AI can already “create” games and “webpages,” current approaches rely heavily on human prompting and trial and error. Plus, said dev AIs work best when creating common, simple, short projects. The AI will just autocomplete stuff, and if it’s wrong, well, it can’t tell the difference. Actual programs created by humans are multitudes of machines working together in perfect sync. Building one involves progressive iteration and refactoring, as well as many kinds of languages, data, images, sound, API use, organization, planning, and references (often from closed-source programs) if it has any hope of working.

    Something as ambitious as an OS? Given the size of the task, you might as well wait for the singularity when all bets are off.

  • Hector_McG@programming.dev

    Round about the same time that advances in genetic engineering allow us to freely mix avian and porcine DNA into viable hybrids.