I think if you want to remove the guardrails you've got to go local. It's not as fast or as good, but it isn't creatively stymied. It's not straightforward and it's constantly changing, though. I was following it for a while until it exploded like a fractal of possibilities. Honestly I'm not sure where it's at right now, but it's better every time I take another look at it.
What do you mean by “go local”?
LocalLLaMA is a small community right now, but there's a Reddit one too.
LLaMA is Meta's AI model that got open-sourced; the community took it and it keeps improving.
Look at oobabooga, which is a GUI for running the models.
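If it helps, here's a rough setup sketch. The repo URL is oobabooga's actual text-generation-webui project, but treat the rest as illustrative — check the project's README for your hardware, since the install steps change often:

```shell
# Clone oobabooga's web UI for running local models
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui

# Install dependencies (a dedicated venv or conda env is a good idea)
pip install -r requirements.txt

# Put a downloaded model in the models/ folder, then launch the GUI.
# --cpu forces CPU inference if you don't have a big enough GPU (slow but works).
python server.py --cpu
```

Then open the local address it prints (something like http://127.0.0.1:7860) in a browser and pick your model from the interface.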
You need a gaming computer in terms of specs, or an Apple M1 chip.
There are also services that will run it for you in the cloud, but privacy isn't as strong as with a completely offline setup.
I use it to continue scenarios that go off the rails, ones ChatGPT doesn't want to continue, to get some ideas. It's not as good, but it has its own fun factor.