This blog is being written by ChatGPT. Not really. I had planned to download the app and ask it to write a short essay on the dangers of AI-generated information, but when I went to the site, I was informed that the platform had reached its limit of users. It gave me the option of leaving my email so I could be contacted regarding future availability, subsequent to beefing up whatever you beef up to accommodate more lazy, illiterate slackers…I did not.
I don’t know what the current “limit of users” amounts to (many, perhaps many, many millions, I’m guessing), but this months-old company and what it portends is the best and worst of human accomplishment. The best, of course, would be a lightning-speed research tool that would scan, parse, organize and regurgitate data on any given subject…on demand. Wow! Think of precedent research for lawyers, symptomatic confluence in medical diagnoses and the sorting, charting and graphing of repetitive information on just about everything. Mind-blowing…right?

Well, yeah, until some wayward shred of Wiki-data or dastardly insinuated misinformation steers the search to an unreliable conclusion. That would be bad at a minimum…maybe the worst. I’m told, by acquaintances with considerably more knowledge of AI, that there is an ethical commitment, embraced by most technology leaders, to install in every beta version algorithmic oversight that scrubs the most obviously unvetted data and tags iffier data by degrees of iffiness. That’s not very comforting when purposeful information manipulators have the same tools and are as smart as or smarter than the Boy Scouts. The abyss that we seem so eager to jump into is fraught with foretold fire and brimstone and irreconcilable, self-inflicted pain.

Please, all of y’all who’re using this and similar software to write your poetry, research papers, ad campaigns, break-up texts and blog posts, abandon ship before it’s too late and get back to doing these things the old-fashioned way…by copying off of your neighbor.