This is very odd. It seems to me it would be easier to take a real picture of a Walmart and then photoshop in the “Investing in American Jobs” signs (which, by the way, there are too many of, and they’re too large to be realistic). But instead we get this AI garbage. Why? Why not just use a real picture? Why have an AI generate an obviously fake Walmart checkout aisle when real pictures of the same are so, so easy to come by?
I’m definitely heading in this direction, personally. The guy who got me into Linux a bajillion years ago is a crotchety old guy who (in the mid-2000s) was of the opinion that if a website didn’t work in his text-based browser (I don’t remember for sure if it was Lynx; it may have been something else), then that website wasn’t worth visiting. At the time I thought that was pretty over-the-top, a bit too extreme for me, but the older I’ve gotten, the more I think he was onto something. I think I’ll mess around with Lynx this weekend and see what it’s like. Sure, you can’t visit most of the internet that way, but hell, at this point that seems like a plus to me, gotta say.