Self-Certified Life Coaching

AI Policies for Print, Film, Broadcast, Podcasts, and Everything In Between.

I don’t want to go back to Luddite times, and neither do you.  In fact, we can’t, and we especially can’t with regard to communications.  Each new electric communications technology, from telegraphy through telephony and on through wireless telegraphy, radio, television, satellite communications, our current computer age, and now Artificial Intelligence (AI) – all have brought calls that the end of the world is near, the machines will replace us, and we as simple ordinary folk just have to deal with changing times. Please note that I didn’t even mention secondary threats like rock and roll, comic books, and pornography, whose very existence is due to communications technologies. All were touted as threats.  Hell, Plato even called for state control over the stories told to children to prevent them from copying the squabbling behavior of the Greek gods. I might even agree with some calls for state control over some forms of media because of their effects on children, idiotic adults, and the consequences for different groups of people.

While I don’t want you to respond with, “How big of you to do so, Rosenblatt,” I also don’t want to downplay the concerns of such folks, even though it’s pretty clear to me that the end of the world, whether through climate change or World Wars III, IV, V, and so on, is the result of politics and economic systems.  Maybe the same’s clear for you.  Maybe not.

Many of my similarly-aged friends (I’m 75) are outraged that their once competent selves are reduced to having to mediate their communications, and by mediate I mean use computers for transactions that their once competent selves could execute by a letter, a trip to a bank, a trip to a local City Hall or the Motor Vehicle Department or Health Department, and so on and so forth. And don’t get me started on how Zoom terrorizes some.  Equally important is a fear that their identity or savings or children or spouses or friends will be stolen (I really like the term scraped, but I use stolen here) and shipped off to North Korea, Russia, Iran, or the Nigerian prince who, for some inexplicable reason, no longer writes to me. It took me a VERY long time to get friends to at least ask me if a particular letter, Email, text, or phone call is legit before responding or clicking on something that I will then have to spend time helping them undo.

To put the matter simply, and I can only speak from experience, here in the U.S., people are easily duped.  Whether or not the attribution of “There’s one born every minute,” goes to Phineas T. Barnum, there IS at least one born every minute.  My friends tell me stories.  My in-laws tell me stories.  Yet do they change their communication behavior?  Rarely, and instead most just create a brand new Facebook or Instagram profile.  People, right?  You can’t beat them with a stick, and you can’t change them because, as the comedian Ron White says, “You can’t fix stupid.”

A couple of years back, a savvy friend of mine and I were talking about Hannah Arendt and her book, Eichmann in Jerusalem.  We joked that what we saw in AI-generated essays, narratives, and poetry was as banal as the evil Arendt wrote about.  We agreed that we should hire ourselves out as AI Editors.  We would take AI-generated works and edit them into less passive, less stilted, less poofy (where poofy means swollen), less self-important, less definitive … wait, I’m getting lost here because anyone who reads and writes can tell the difference between AI-generated pap and real writing.  At the same time, AI-generated pap can be used to teach poor writers how to improve their writing skills. But now there are AI tools that can humanize AI. That’s not the first time that technology made me obsolescent before I made money from my skills.

But let me get to my two main points, both of which I will bet you see coming.

Point 1.  Any AI policy should be broad enough to take into account future changes, and narrow enough to be easily put into effect. For example, in the New York Times of March 25, 2026, there’s a guest opinion essay, “The ‘Shy Girl’ Fiasco Shows Why Trust in Writers Is Plummeting,” by Andrea Bartz, who writes, “most readers want disclosure when A.I. has been used, and they are quick to note the telltale rhythms and patterns of popular large language models.”  Bartz also acknowledges that AI writing will improve, and that she (Bartz) submitted a sample of her writing to an AI detection tool and it came back with 82 percent certainty that her original writing was AI.  Bartz concluded that AI was mimicking her!

So I went to Grammarly’s AI detection tool (Why?  Grammarly’s is free) and submitted this essay up to the end of the preceding paragraph.  The result was an AI detector rating of zero percent.  I smiled for a second but then continued writing. I’m a busy guy with lots of things to do.

I don’t want to go into how machine learning works, primarily because I’m no expert.  You can go HERE if you want for a pretty clear explanation.

My point here is that readers and listeners are easily duped, and while AI can produce a yet undiscovered Shakespeare history, or a yet undiscovered Shakespeare tragedy, or a yet undiscovered Shakespeare comedy, that shouldn’t strike anyone as news.  You don’t need AI to create art forgeries.  You need humans, and as I said, humans are dumb.  And that’s why all readers, all listeners, and all viewers need a policy.  But before I volunteer my thoughts on a policy, I did write TWO points.

Point 2.  All AI is not the same. Nor are its users. For example, I taught for 25 years on and off at the City University, some years full-time, but most years as an adjunct while working full-time as a broadcast tech.  Why?  Broadcasting paid more.  Chairs of the departments I worked for, and I worked primarily in broadcasting and communications departments, were always surprised when I brought in instances of plagiarism I found in student papers, both undergraduate and graduate, and this despite the fact that I spent at least two hours each semester in each class talking about original writing and what constitutes plagiarism.

These were heady pre-Internet days, when a person like me couldn’t simply go to Lucy and Rickipedia to discover facts and easily check online whether a chunk of words came from an unattributed published source.  I used what I liked to call The Kominsky Method … check that, The Kominsky Method is a nice Netflix series starring Michael Douglas and my man, Alan Arkin … oh yeah, I used The Rosenblatt Method.  On the first day of class in every section I taught, I asked students to prepare a 500 to 700 word essay on the types of radio stations they listened to, the types of TV shows they watched, and the types of films they went to see.  “Impress me,” I asked, and I told the students the papers would not be graded, that the papers should be typed and double-spaced (remember typewriters?) and we would discuss the papers in class the following week.  “Impress me,” I reiterated.  My goal here was to have writing samples, and once I did, I had a general handle on how these students could write, and when I came across a string of sentences that could have appeared in Time, Newsweek, or U.S. News and World Report, I could spot them a mile away.  But I needed proof to charge a student with plagiarism, and I’m not going to reveal how I got proof right now.  But I can give you a hint.  It involved going to a halfway decent college l-i-b-r-a-r-y. Ssshhhhhhhhh.

I tell you this story because while some can spot plagiarism, or AI, or even bullshit a mile away, many cannot, and the fact that many cannot is no reason to banish AI.

If you want me to edit your academic work, just ask.  I may charge you, though. But I’m not AI.  AI can edit academic works as well.  My older daughter is a physician and researcher at a well-known hospital.  I write this not only because I like writing that “my older daughter is a physician and researcher at a well-known hospital,” but because this daughter, who writes medical research studies and reports for online medical publications, and has been published in medical training books, occasionally gets hornswoggled into editing the works of other researchers seeking publication.  When this daughter was in a crunch for time, she would hand off the studies written by physicians to me, and I can tell you that many of them couldn’t make their ideas clear if their lives depended on doing so.  So, I would tighten up the syntax employed, the punctuation used, the sentence structures created, the turns of phrase chosen, and so on and so forth, to make these important papers comprehensible and readable.  All to APA standards. The APA is the American Psychological Association, whose writing guide is one of a handful used for publication. My daughter would then take over to check the medical science.

In this scenario I, Mark Rosenblatt, was AI.  Unpaid as well, as I am for so many things I do.  If it weren’t for my legendary humility, I would complain far more often than I do.

If AI is used to correct the punctuation, the syntax, sentence structures, and so on and so forth to make a document readable, then so be it.  If AI is used to generate an entire story or narrative, then that’s no different than reading out loud from Wikipedia, and should be avoided in all professional and classroom settings.

An AI Policy

An AI policy should take into account the distinctions I refer to.  An AI policy should consider, as Andrea Bartz wrote in the New York Times, that “most readers want disclosure when A.I. has been used.”  Most listeners and viewers do too. In commercial films, if patrons can sit through 3 minutes of corporate production logos, patrons can read a 15-second AI disclaimer. Or not.

But while I openly admit I like typewriters, I don’t want to go back.  I like computers and the luxury of correcting thoughts and ideas and errors on the fly.  You do, too.  I also see that AI, like every preceding advance in communications technology, is not without risks.  But let’s NOT go Luddite and aim for the lowest common denominator in AI detection, and by that I mean banishing it entirely, because to quote Bob Dylan, “to live outside the law you must be honest,” and to paraphrase an expression whose origin is unknown, “if Artificial Intelligence is outlawed, only outlaws will then have Artificial Intelligence.”

By the way, I just submitted this entire essay to Phrasly.AI and it came back 100% human, to which I can only say, “Thank God.”