Way late edit here: I agree with the disapproval of my statement. I was thinking about how LLMs kind of work the same way, designed by humans to do something humans can already do, but thousands of times faster. However, “revolutionary” was very much the wrong word choice.

  • SirSoy@lemmy.world · +29/−3 · 2 months ago

    I think you’re underselling the automated assembly line quite a bit and how truly groundbreaking it was. LLMs are the progression of an existing technology; the AAL was something completely new.

  • Dark Arc@social.packetloss.gg · +15/−2 · 2 months ago

    Here’s the big difference. Automated assembly lines do a job better than the average human can. LLMs do the job consistently worse than the average human would.

  • hoshikarakitaridia@lemmy.world · +7/−1 · edited · 2 months ago

    This might get downvoted, but let me share a nuanced take.

    AI is both overhyped and underhyped, depending on who you ask.

    Yes, right now LLMs won’t change the world, don’t make great lawyers, don’t replace software devs, and don’t write all of your emails. But if you’ve used some of the more recent ones, they can definitely help you express yourself or write more quickly, and they can give you a bird’s-eye view of a topic.

    And let’s also make clear that AIs are not useless, nor is their potential exhausted. Right now they are useful helpers in specific scenarios, and they only get more useful from here.
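
    For concreteness, here is a minimal sketch of that kind of “helper in a specific scenario”, getting a bird’s-eye view of a topic. It assumes the official OpenAI Python client; the model name and prompt are illustrative, not a recommendation.

    ```python
    # Minimal sketch: using an LLM as a "bird's-eye view" helper.
    # Assumes the official OpenAI Python client; the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def topic_overview(topic: str) -> str:
        """Ask the model for a short, high-level overview of a topic."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative; any chat model would do
            messages=[
                {"role": "system", "content": "Give a concise, neutral overview."},
                {"role": "user", "content": f"Give me a bird's-eye view of: {topic}"},
            ],
        )
        return response.choices[0].message.content

    print(topic_overview("automated assembly lines"))
    ```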

    There are important open questions here: what constitutes a personality, what a right to one’s own image covers, when imitation becomes stealing, and how an AI model should even be treated under copyright.

    I think the problem is that people have promised too much of this technology, and that’s why everyone just associates it with bad results. But there’s more to it, and nuance gets lost in the stream of strong opinions.

    I like the comparison.

    The implementation is different, the effects will be different, and how we evolve with it will be different, but AI already has a solid impact and it will continue to have one.

    And industrialization was neither good nor bad. How some people fucked over poor people’s lives in the process is despicable, but things getting faster or more efficient is not inherently a bad thing.

    Now, we definitely need rules here. Some of the shit people and companies do with AIs is wild and should be illegal, but as always, the law takes time. Maybe it’s an illusion, but I hope for a healthy integration of AI into our lives in small ways. And I really mean small. Give me ChatGPT, AI spell checking, and maybe some code autocompletion. Don’t cram AI assistants into everything, because that’s not the way to go. Change done right moves slowly, and if we stuck to the things we actually know how to use, we’d be a lot better off right now.
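
    A rough sketch of what “small” could look like in practice: an AI spell check that is instructed to fix spelling and grammar and nothing else. Again assuming the OpenAI Python client; the model name and prompt are illustrative.

    ```python
    # Minimal sketch of "AI in small ways": a spelling/grammar pass that changes nothing else.
    # Assumes the official OpenAI Python client; model name and prompt are illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def fix_spelling(text: str) -> str:
        """Return the text with only spelling and grammar corrected."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative
            messages=[
                {
                    "role": "system",
                    "content": "Correct spelling and grammar only. "
                               "Do not change wording, tone, or meaning.",
                },
                {"role": "user", "content": text},
            ],
        )
        return response.choices[0].message.content

    print(fix_spelling("Change done rigth moves slow, and thats fine."))
    ```

    The interesting design point is the constrained system prompt: the model is used as a narrow tool rather than a general assistant bolted onto everything.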

    Just as automated assembly lines at some point led to electronic devices being more accessible, I hope the LLMs and other AIs we use will become well placed and non-intrusive.

  • halvar · +3 · 2 months ago

    Right now that may not be the case, but as someone who has looked into the topic, I can definitely say it has similar potential. A lot of companies just duct-tape AI onto their product, and the result is usually shitty. But if you ignore all those “projects” that are obviously meant to fail, there are promising projects and applications for AI made by people who actually understand what they are doing, including the limitations and the upsides of the technology, and those may well turn out to be genuinely useful.

    Sorry for the undecipherable wall of text, it’s early in the morning for me.

  • nerobro@fedia.io · +2 · 2 months ago

    This is a very bad take. LLMs appear to be at their limit. They’re autocomplete and are only as good as their inputs. They can’t be depended on for truth. They can’t even be trusted to do math.
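
    To make the “autocomplete” point concrete, here is a minimal sketch using the Hugging Face transformers library (GPT-2 is just a small, convenient stand-in): the model greedily continues the prompt with likely tokens, and nothing in it actually performs the arithmetic.

    ```python
    # Minimal sketch of the "autocomplete" point: a language model only continues text.
    # Assumes the Hugging Face transformers library; GPT-2 is a small, illustrative stand-in.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    # Greedy decoding: the model picks the most likely next tokens.
    # It is predicting plausible text, not computing a sum.
    result = generator("The sum of 17 and 25 is", max_new_tokens=5, do_sample=False)
    print(result[0]["generated_text"])
    ```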

    LLMs work as a place to bounce things off of, but they still require editorial work afterward, even when they are working their best.

    LLMs take huge amounts of power to build, to keep running, and to correct their output.

    In general, LLMs don’t significantly reduce labor, and they are still very costly.

    Even the most basic assembly line multiplies someone’s output. The best assembly lines remove almost all human labor. Even bad assembly lines are wholesale better than individual assembly.

    As long as it’s an LLM, I don’t believe it will ever be “useful”. We need a different technology to make this sort of assistance useful.