With recent advancements in AI technologies such as Code Interpreter and LongNet, it seems we are getting closer to a point where AI can write any software, including an operating system. Surprisingly, predictions from platforms like Metaculus suggest that this could become a reality as early as 2029 [1]. The combination of GPT-5, access to factual information through search, the ability to understand and keep track of vast amounts of code with LongNet, and the ability to debug and refine code with Code Interpreter until it achieves the desired outcome could enable AI to accomplish tasks previously thought impossible for a single entity.

It would be interesting to explore alternative ways of assessing this progress, such as analyzing the growth of GitHub issues opened versus issues solved over time, to see whether the two are expected to converge in the near future.
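As a rough sketch of what that analysis could look like (the repository, the month windows, and the unauthenticated requests below are assumptions purely for illustration), the GitHub search API's `created:`/`closed:` date qualifiers can be used to count issues opened versus closed per month:

```python
# Hypothetical sketch: compare GitHub issues opened vs. closed per month
# for a given repository, using the public GitHub search API.
# Repo name and date windows are illustrative; unauthenticated requests
# are heavily rate-limited (about 10 search requests per minute).
import requests

def monthly_issue_counts(repo: str, month: str) -> tuple[int, int]:
    """Return (opened, closed) issue counts for `repo` in a month given as 'YYYY-MM'."""
    base = "https://api.github.com/search/issues"
    start, end = f"{month}-01", f"{month}-28"  # rough month window, for illustration only
    opened = requests.get(base, params={
        "q": f"repo:{repo} type:issue created:{start}..{end}"
    }).json().get("total_count", 0)
    closed = requests.get(base, params={
        "q": f"repo:{repo} type:issue closed:{start}..{end}"
    }).json().get("total_count", 0)
    return opened, closed

if __name__ == "__main__":
    # Placeholder repo and months; the interesting signal is whether
    # the opened-minus-closed gap shrinks over time.
    for month in ["2023-01", "2023-02", "2023-03"]:
        opened, closed = monthly_issue_counts("torvalds/linux", month)
        print(f"{month}: opened={opened} closed={closed} backlog_delta={opened - closed}")
```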


  1. Metaculus: AI Programming - 50k Lines of Code

  • AFK BRB Chocolate@lemmy.world · 1 year ago

    I would be careful with this statement, though, as this is getting slightly philosophical: are we having original thoughts, or are we also just “predicting the next word”?

    I mean, if humans aren’t having original thoughts, how do you explain any advancement? The first use of tools? The development of language and writing? Agriculture? Architecture? The printing press? Lemmy? That seems like such a strange argument.

    Real original thought (research) may be a few years away, but I wouldn’t be sure it can’t do that either. Research is often also “just” incremental, building on and combining old research; with a little bit of stochastic guessing in the right direction, it may be able to find new, innovative solutions, i.e. have original thoughts.

    Well, maybe, but I think that would be a fundamentally different approach/algorithm than anything we have today.

    I never said that AI will never be able to write an OS, just that it’s not anywhere near as close as it might seem based on an LLM’s current ability to create code.

    • philm@programming.dev · 1 year ago

      That seems like such a strange argument.

      What I’m trying to say is that “predicting the next word” could also include having original thoughts, based on what happened previously. Something like: we need food, plants grow, they produce seeds, maybe the seeds can be used to grow new plants; next predicted “word”: use the seeds and grow plants.

      Well, maybe, but I think that would be a fundamentally different approach/algorithm than anything we have today.

      Yeah, it could and likely will be, but the current unsupervised approach is quite effective and has not yet reached its limit. AI (LLM) research is much more incremental than you think; the last ground-breaking paper was “Attention Is All You Need” (the Transformer paper), and even that combines a lot of techniques from previous algorithms (in a slightly new configuration).
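For a concrete picture of the core operation that paper introduced, here is a minimal sketch of scaled dot-product attention (written for illustration only; the shapes and toy values are assumptions, not taken from the thread):

```python
# Minimal sketch of scaled dot-product attention, the central operation in
# "Attention Is All You Need". Shapes and values below are illustrative only.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K: arrays of shape (seq_len, d_k); V: array of shape (seq_len, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V                                # weighted mix of value vectors

# Toy example: 3 tokens with 4-dimensional keys and values.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)
```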