Introductory Blog Post!
My name is Eric Callagan. I am a father to three amazing daughters: Scarlet, Olive, and Brigid. I've been a creative since my very first colorfully decorated nappy. I've since gotten more civilized in my mediums and end results. Feel free to check out the rest of my website and critique away! I made this site long ago when I started my business, and the most traffic it ever got was for scoring my custom Beat Saber competition! Jim Morrison once said, "Whoever controls the media controls the mind." Can you feel me controlling your mind with my media yet? Maybe the fact that you don't notice means it's working! Thanks!
-Eric Michael Callagan 09/15/2024
Artificial Incongruence!
By: Eric Michael Callagan
Artificial intelligence has taken the world by storm over the past few years, upending societal and professional norms. This technological revolution has us questioning our ethics and helping us understand our priorities as people. I feel there is a misconception about what "AI" actually is and what it should be used for. I myself have made use of this new tool, sometimes to my delight and sometimes to my utter frustration. The large language models these tools are built on are exceptionally good at producing responses to a given prompt that seem plausible and even probable. Unfortunately, they can lack the nuance, and sometimes the factual accuracy, that we depend on. There is a large gap between intelligible, useful information and pleasantly presented word vomit. Journalism is one profession of many that has become infatuated with, and in some cases undermined by, the latest wave of AI models. In my opinion, journalism isn't a profession where simple word vomit should be acceptable. Though, as I'll point out later, it does have its place.

My issues with artificial intelligence, and with its prevalence on social media, are profound. I wade through many posts on Facebook and other social media sites that are clearly false or digitally produced. Just today I commented on someone's post of an AI-generated image of a little girl holding a puppy, seemingly being rescued from the flooding in the wake of Hurricane Helene. The poster's response was basically that, sure, it was a fake image, but that didn't change the fact that the flooding was a bad thing. To an extent, I can see where they are coming from. A case could be made that the image would drum up more support and donations than something factual coming out of the flooded areas, and maybe it's better not to have a reporter there taking photos while rescue operations are underway. I find this line of thinking to be pretty dangerous and a slippery slope. How long until an artificially generated post portrays violence somewhere in the world where none exists, and it's used as an excuse to go to war? "Well sure, it's not real, but we gained new territory and eradicated terrorism in the region" sounds a lot like someone else's "They used fake news to spark a genocide." Journalists in these instances have a duty to the public to deliver the truth. In this new world of more and more artificially generated stories and photos, I think journalists actually NEED to use artificial intelligence to suss out the imposters. But then what use does AI serve, if only to create fakes and then confirm or discredit its own creations?

As Haistin Willis points out in the article "Journalism on Autopilot," there are plenty of benign and helpful uses for AI in journalism. In the article, local sports coverage is used as a main argument for AI's place in modern journalism. After all, there are so many games and only so many reporters to cover them. The reporter says that with AI covering these events, they can spend their time going to the bigger games, getting better photos and juicier quotes from participants. I think this also slightly misses the mark. The main problem may be with how we as a society value certain types of work. If outlets hired more journalists and paid them better, they could perhaps cover just as many events with real information and quotes. They can't afford to do this, though, because we as consumers don't typically value that work enough to make it viable.
My argument would be that, rather than have computers churn out this content, maybe it doesn't need to exist in that form at all. The article notes that a lot of the statistics and information about a game comes from coaches, players, and families, and that this information sometimes needs to be fact-checked before being published. A better system, I think, would be one where the stats are collected by the officials overseeing the game and made available to players and families afterward, without any need to "publish" them through a news outlet covering the event. In this modern era of social media and devices specifically designed for sharing large amounts of information, we may not need these old institutions to be the only place where these statistics can be found. I may feel differently when MY kids are in sports and I want their accomplishments published in a newspaper or a more official media source. Then again, I don't know that I'd want my children's accomplishments watered down to whatever an AI might churn out quickly for the local paper.

The big limitation of these language models is that they lack human creativity. The more AI-generated content is out there, the more these models end up drawing from AI-generated sources. This loop will eventually lead to more and more hallucinations (false information presented as though it's real) and watered-down, repetitive reporting. Eventually, outlets may dispense with the pleasantries altogether and just put out raw stats, at which point we're back to having it handled by coaches and teams rather than a news outlet. I also think that AI lacks the understanding, critical thinking, and self-reflection required to report the news accurately and fairly. A reporter can talk with people who saw or experienced an event, whereas an AI is limited to information it can glean (or hallucinate) from online sources. Even if an AI could speak to a person and use that interview to form a story (which is technically possible), the machine would lack the awareness and personal connection needed to ask questions and elicit answers that we as viewers or readers might find interesting or compelling.

I have dealt with the frustrations of AI's lack of true understanding firsthand. I have created a couple of virtual reality experiences using Unity and the Quest 2 and 3 VR headsets. When I ran into roadblocks in my own understanding of the programming, I turned to ChatGPT. I often got seemingly good advice that simply didn't work, or laughably silly solutions. After much prompt engineering and beating my head against the desk, I would sometimes get usable code for these games. Most notably, the axes in my Paul Bunyan axe-throw game would not spin properly when thrown. After quite a long time of prompting and reworking the prompts, I finally got the axes to spin correctly; a rough sketch of the kind of rotation script I ended up with is below. I don't know that I saved much time versus just researching the problem myself and writing the script on my own. I did, however, find it useful to "discuss" the topic with the AI.
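To give a sense of what that looked like, here is a simplified, illustrative sketch rather than the exact code from my project: the class name, the Throw method, and the spin speed are placeholders, and it assumes the axe has a Rigidbody that is handed a velocity the moment it leaves the player's hand.

using UnityEngine;

// Illustrative sketch only: names are placeholders, not the actual script
// from my project. Assumes the axe prefab has a Rigidbody attached.
public class AxeSpin : MonoBehaviour
{
    // Degrees per second of end-over-end spin; tune to taste.
    [SerializeField] private float spinSpeed = 720f;

    private Rigidbody rb;

    private void Awake()
    {
        rb = GetComponent<Rigidbody>();
        // Unity clamps angular velocity to 7 rad/s by default, a common
        // reason thrown objects barely rotate no matter what you apply.
        rb.maxAngularVelocity = 50f;
    }

    // Call this at the moment the axe leaves the player's hand.
    public void Throw(Vector3 throwVelocity)
    {
        rb.isKinematic = false;
        rb.velocity = throwVelocity; // called linearVelocity in newer Unity versions
        // Spin around the axe's local right axis so it tumbles end over end;
        // angularVelocity is in radians per second, hence the conversion.
        rb.angularVelocity = transform.right * spinSpeed * Mathf.Deg2Rad;
    }
}

Whether or not that angular velocity cap is the culprit in any given project, this is the general shape of the fix: give the axe a linear velocity and an angular velocity in the same frame and let the physics engine do the rest.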
As Rob Torn points out in the second article, "Newsrooms Should Carefully Consider AI," AI can be exceptionally good as a brainstorming tool. I agree. In my free time I have been attempting to write a sci-fi novel. While I would never have the AI write any of the book, I have many times found it helpful to bounce ideas off of it, especially on topics where I may not have as much working knowledge, OR when the AI can offer a perspective I hadn't yet considered. The trick lies in HOW we use these new tools, not in whether we use them. We're all trying to keep up with the Joneses, after all.
-Eric Michael Callagan 10/06/2024