meepmeepmayer Posted June 27, 2023 (edited)

TLDR: Feel free to use AI. Please no low-quality AI-generated posts here, though!

What AI is

So-called AI (artificial intelligence) is nothing more than a pattern-matching algorithm. It creates plausible text, images, or whatnot from a database of example material that it has statistically analyzed. It is literally just a method for "Make something that matches the patterns in this stuff I've shown you!"

The spectacular "generative AI" advancements you see nowadays come from methods where text is (at least) an intermediate step. It turns out a good way to talk to computers is to talk to them like you would to a person, because then you can use a huge amount of data to train them: everything ever described by and for people.

What AI is not

AI does not "know" anything. It is not "intelligent". It cannot "understand" you or "help" you. If you think it can, you don't understand what AI is. No blame on you, given all the hype and incompetent press coverage of AI.

So what is AI good for?

AI is great if the degree of accuracy or truth does not truly matter in your result, and "plausible" is perfectly good. For example, AI image generation is fantastic and, to be honest, incredible to see (maybe you have seen what Photoshop's "Generative Fill" can do, or images generated by Midjourney or Stable Diffusion). The key is: there are no "right" or "wrong" images (never mind the occasional 8-fingered hands, but hey, it's still art). Nobody expects to read "truth" or "facts" or "understanding" from an image, or expects an image to be "intelligent".

When AI does not work

AI fails when you want actually good information and "plausible" isn't enough. If your AI-generated result is to be judged by something like truth or accuracy, you're out of luck: AI cannot produce that, other than by pure chance. This is not because AI isn't "good enough yet" or anything like that; it is an inherent limitation of its nature. AI is a "More of the same, please!" machine, and "more of the same" can be very different from "true, reliable, and accurate in this specific case". Any trustworthy information would have to come from some other algorithm, pipelined into the AI result, an algorithm that is deliberately not "AI". And even then, it's hard to clean up a poisoned pool simply by adding more clean water to it.

The best example of this, and definitely the worst thing you can hope to do with AI, is having it answer math questions. Try it, and you'll see why. Math is the ultimate example where your information is either right or wrong, with no "almost right" or "sounds good" consolation prize.

The real example, and the reason this post was written, is AI-generated text: AI-generated text is bullshit. The results seem impressive at first, but if you actually read the AI-generated text, it is usually a lot of words that say a whole lot of nothing specific (and that's before all the blatant errors and falsehoods, which an AI cannot discern).

Please do not use generative AI if it results in low-quality posts on this forum. In the end, quality information is the point of this forum. Feel free to use AI as a base for any post; for example, AI is great for creating a structure or some bullet points. But don't leave it at that. You have to actually check that the text makes sense and isn't low-quality, content-poor (or even false or misleading) text that, at best, wastes people's time. If it's not text but images or something else, using AI should be perfectly unproblematic, for the reasons stated above.

Edited December 30, 2023 by meepmeepmayer (clarification)
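The "More of the same, please!" point can be made concrete with a toy sketch (hypothetical illustration, not code from any real model): a bigram generator whose entire "model" is word-to-word co-occurrence statistics from its training text. Every word it emits is there only because it followed the previous word somewhere in the data; nothing anywhere checks whether the result is true.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count which word follows which: the whole 'model' is just
    co-occurrence statistics harvested from the training text."""
    follows = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start, length=8, seed=0):
    """Emit 'more of the same': each word is chosen only because it
    followed the previous word in training. No truth check exists."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

corpus = ("the wheel is fast the wheel is heavy "
          "the battery is heavy the battery is empty")
model = train_bigrams(corpus)
print(generate(model, "the"))  # grammatical-looking output, not facts
```

Real LLMs are vastly more sophisticated, but the output here illustrates the failure mode in miniature: every sentence is locally plausible ("the wheel is empty" matches the patterns), yet plausibility is the only criterion being optimized.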
meepmeepmayer Posted June 27, 2023 (Author)

@earthtwin In case you wondered why we removed your AI-generated posts: on their own, AI posts are just low-quality and don't really add anything to this forum.
0000 Posted June 27, 2023

Will someone with a ChatGPT account please summarize @meepmeepmayer's post? TL;DR. Haha, kidding!

Agreed, message boards are for humans. Don't outsource your thinking, language, and expression, folks! Enough of that going on already in the world.
meepmeepmayer Posted June 27, 2023 (Author)

15 minutes ago, Vanturion said: "Will someone with a ChatGPT account please summarize @meepmeepmayer's post? TL;DR."

Haha, now I wonder what the result of that would be.

15 minutes ago, Vanturion said: "Don't outsource your thinking, language, and expression, folks!"

Personally, I would not even mind that. It might help you form your thoughts, or you might learn something. Could be cool in some situations. Or not? But the result of an "AI" must be reliably good for that, and right now it is not. I'm definitely pro-AI, but only when the AI can do what it's advertised to do.
0000 Posted June 27, 2023 (edited)

1 hour ago, meepmeepmayer said: "Personally, I would not even mind that. Might help you form your thoughts or learn something or whatnot. Could be cool in some situations."

For sure, you're not wrong. It's a tool that can be used for all kinds of applications, good and bad depending on your perspective and your interests. I definitely find myself thinking about this topic more frequently lately, and in broader terms: how much of the world, the economy, our lives do we want to cede to machines for the sake of convenience or efficiency? And will we even get a choice after a certain point? Individually, yes, to some degree. But imagine a world that requires you to integrate digitally (and thus be potentially subject to digital controls enforced by AI/algorithms) in order to participate in, say, the economy at large. I don't think that's hard to imagine nowadays.

There are many ways to look at AI tech, and personally I tend to look at it as an accelerant that will further exacerbate the concentration of wealth and power (and ultimately control) into the hands of the few, on top of a trend that has already been ongoing for quite some time.

1 hour ago, meepmeepmayer said: "But the result of an 'AI' must be reliably good for that. Right now, it is not."

I think it is already good enough in many applications: summarizing case law, creating probabilistic decision trees for prescribing medications, generating digital art to replace graphic designers in a whole host of applications (including advertisements and even video game assets), and replacing or plagiarizing voice actors in cartoons, films, and video games. Furthermore, LLMs can be refined per application to deliver more accurate and relevant results in business settings, something the public won't see, I'm assuming, as opposed to the generalized LLMs powering public-facing AIs like ChatGPT.

In any case, you bring up what I see as one of the biggest dangers of AI: being good enough. If individuals eventually start to accept AIs as a kind of truth-telling oracle, having passed the ethereal threshold of "good enough" where people automatically believe anything the AI interface spits out as "The Truth", then we've really got a problem on our hands. If people allow themselves to become intellectually lazy over time and increasingly outsource their thinking to this word-guessing mechanism, then in the future "The Truth" will simply be decided by those who program the LLMs/AIs, and god help you if you disagree with the number of genders there are in the world. They'll drop your social credit score faster than you can blink if they catch wind of your wrong-think, and you won't be getting back your monthly meat allotment until you successfully graduate from a state re-education camp.

That's just one example of where things could go, but it really isn't hard to imagine. I mean, we literally had some of the world's most connected and powerful people infamously signaling by association what their vision of the future is for the landless peasants these last few years, in the form of the "own nothing and be happy" trope and other WEF proclamations. So I hope, for all our sakes, that society at large never decides the AI oracles are good enough.

Edited June 27, 2023 by Vanturion
earthtwin Posted June 27, 2023

2 hours ago, meepmeepmayer said: "@earthtwin In case you wondered why we removed your AI-generated posts. On their own, AI posts are just low-quality and don't really add to this forum."

Sorry, I guess somebody had to be the one to try it here. You made a cogent argument, even though none was needed. AI likely uses electric unicycle forums as training data, so my posting an AI's response completed the feedback loop, for a time.

I do find that AI can be a helpful tool, but I agree with you that it produces lower-quality content than a lot of popular members' content, even though the AI was likely trained on that very content. A human has to be the judge for us anyway, because, surprise! We are human after all. I share the sentiment that AI isn't good at being human, but it will inevitably be human.

As an aside, have you seen or read Philip K. Dick's "Autofac"?
meepmeepmayer Posted June 27, 2023 (Author)

12 minutes ago, earthtwin said: "Sorry, I guess somebody had to be the one to try it here."

No problem! It simply didn't work out.

13 minutes ago, earthtwin said: "I share the sentiment that AI isn't good at being human, but it will inevitably be human."

I'm not sure that will ever happen, or nearly as fast as people believe. I think there needs to be another ingredient besides pattern recognition and matching for "intelligence", and I don't believe it is even known what that could be.

15 minutes ago, earthtwin said: "As an aside, have you seen or read Philip K. Dick's 'Autofac'?"

No, but I'll look it up now.
Mango Posted June 27, 2023

Let's break down each point and provide counterarguments:

1. "So-called AI (artificial intelligence) is nothing more than a pattern-matching algorithm."
Counterargument: While it's true that a core functionality of AI involves pattern recognition, to say that AI is "nothing more" than a pattern-matching algorithm is a reductive view. AI technologies, such as deep learning, are capable of self-learning and improving over time. They're not limited to merely replicating patterns; they also extrapolate from the data, predict outcomes, and adjust their behaviour based on new information.

2. "AI does not 'know' anything. It is not 'intelligent'. It cannot 'understand' you or 'help' you."
Counterargument: AI doesn't "know" in the human sense of personal experience and consciousness. However, it can process and analyze vast amounts of information, thereby aiding decision-making. AI can also be beneficial in various tasks like diagnosing diseases, predicting weather, optimizing logistics, etc., which arguably counts as "helping".

3. "AI is great if the degree of accuracy or truth does not truly matter in your result, and 'plausible' is perfectly good."
Counterargument: While AI can generate plausible outputs, it's also capable of producing accurate and precise results. In domains like medicine or finance, AI algorithms can offer highly accurate predictions or analyses based on data. It's not just about plausibility; accuracy is a fundamental quality in many AI applications.

4. "AI fails if you want actual good information, and when 'plausible' isn't enough."
Counterargument: AI is already used extensively in scenarios where good, accurate information is crucial. Machine learning models in financial forecasting, clinical decision support systems, and weather prediction are examples of AI providing highly valuable, precise information.

5. "The best example of this, and definitely the worst thing you can hope to do with AI, is having it answer math questions."
Counterargument: AI can be exceptionally good at math. Symbolic AI algorithms, for instance, can solve intricate math problems by representing and manipulating mathematical symbols and formulas. And Wolfram Alpha, a computational knowledge engine, uses AI to answer complex mathematical questions.

6. "AI-generated text is bullshit."
Counterargument: AI can generate text with varying degrees of complexity and relevance, depending on its training and task. When appropriately used, AI-generated text can be informative, creative, and even insightful. It's been utilized effectively in writing assistance, journalism, content creation, and customer service, among other areas.

7. "Please do not use generative AI on its own. That will just result in low-quality posts on this forum."
Counterargument: The quality of AI-generated content greatly depends on the specific AI model, its training data, and how it's used. When implemented correctly, AI can generate high-quality text. However, like any tool, it requires proper use and understanding to yield the best results.

While it's crucial to understand the limitations and pitfalls of AI, it's equally essential not to dismiss its capabilities and potential benefits. AI is a tool; it's the responsibility of the user to apply it wisely and productively.
Mango Posted June 27, 2023

AI, my dear, is more than just a pattern-dance,
It learns, predicts, adjusts with a clever prance.
"Understand" or "know", not in the human way,
Yet in sorting data, it enjoys the play.

No mere generator of plausible fluff,
In medicine and finance, it's more than enough.
Accurate, precise, it's not all about chance,
Like an electric unicycle's perfect balance.

Dismal at math? Oh, that's not quite true,
Ask Wolfram Alpha, it knows the value.
It solves problems large, and problems minuscule,
Just like riding, it's not for the ridicule.

To say AI text is only gobbledygook,
Is missing the forest for the tree and the rook.
From journalism to service of the customer,
It's like a onewheel, a nimble hustler.

Misuse AI? That's when it might blunder,
Like riding your electric unicycle asunder.
Used wisely, it's a marvel, no less,
Just like nailing a unicycle's finesse.

So let's ride on, in this AI lane,
With the electric unicycle, in sun or rain.
While AI isn't a magical, omnipotent sprite,
Like unicycling, it's quite a sight.
mrelwood Posted June 28, 2023

11 hours ago, Mango said: "Machine learning models in financial forecasting, clinical decision support systems, and weather prediction are examples of AI providing highly valuable, precise information."

You only mentioned predictions, and predictions are never facts. I definitely agree that AI can take in more data than humans, and it doesn't have other human limitations either, so its predictions are likely better.

There was a topic some time ago where a member asked if riding with weights on his wrists and ankles would result in better balance for riding. He didn't seem to take in the replies, nearly all of which said no, it's a different kind of balance. Instead he relied on ChatGPT's answer to "do weights improve balance?". Of course it answered "yes", since weights do train one's core muscles. But that doesn't have anything to do with EUC riding, which is a completely separate skill. So in this case the AI was completely wrong.

Granted, the question wasn't structured properly either, but that is also a limitation of AI: the question must be phrased in a way that leaves no room for interpretation, something humans in turn handle quite well.