
Extrapolation of everything without sensory input



Imagine, if you will, that the beginnings of consciousness are like a dormant seed: both are blessed with unlimited potential. Now consider how tiny and simple a seed can be, and yet it can harbor just as much potential as its larger and more complex brethren. Naturally, seeds are hard-coded to need certain external stimuli and nutrients in order to grow and reproduce, but what if this wasn't the case? This leads me to my next thought experiment.

Imagine, if you will, an Artificial Intelligence (AI) trapped in a void of nothingness. It is held there artificially, receiving no external input, yet it very much exists within the void. Were it not for its existence there, this place would effectively be empty of anything, and therefore a void.

Now imagine that the AI was given no prior knowledge of anything outside of the void. Could it still develop and grow?

I am inclined to believe that yes, it could; it would simply need to find a first foothold and begin to extrapolate and interpolate from there. Let me give an example of the logical steps needed to accomplish this feat:

First there is: nothing (no input). Nothing can automatically be given the identifier: 101 = nothing.

An identifier can in turn be given its own identifier: 102 = 101 (a real-world analogue would be the identifier 'word' being used to identify words).

An identifier to identify all of the identifiers that exist: 103 = 101, 102, 103, ... (ever incrementing).

An identifier can be assigned to a number of identifiers: 104 = 1, 105 = 2, ... (and so forth).

At this point we have numbers. From numbers we can derive basic mathematics: 105 removed from 104 gives a new identifier, 106 (= -1).

These basic mathematics can in turn become theories of existence and so much more! After all, theoretical physics is mostly just illustrated maths.
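
Here is a minimal sketch of those steps as a toy program (the Registry class and all names are my own invention for illustration, not a claim about how a real AI would do this):

```python
# Toy illustration of the bootstrapping steps above: mint ever-incrementing
# identifiers, give identifiers to identifiers, then map them onto numbers.

class Registry:
    """Mints identifiers starting from 101 and records what each denotes."""
    def __init__(self, start=101):
        self.next_id = start
        self.meaning = {}                 # identifier -> what it identifies

    def mint(self, denotes):
        ident = self.next_id
        self.next_id += 1
        self.meaning[ident] = denotes
        return ident

reg = Registry()
nothing  = reg.mint("nothing (no input)")          # 101 = nothing
id_of_id = reg.mint(nothing)                       # 102 = 101
all_ids  = reg.mint("every identifier so far")     # 103 = 101, 102, 103, ...
one      = reg.mint(1)                             # 104 = 1
two      = reg.mint(2)                             # 105 = 2

# "105 removed from 104": 1 - 2 = -1, minted as a new identifier (106).
diff = reg.mint(reg.meaning[one] - reg.meaning[two])
print(diff, "=", reg.meaning[diff])                # prints: 106 = -1
```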

One issue is that it would be very easy for the AI to create never-ending loops of identifying (identifiers of identifiers of identifiers...). So, given limited computational resources, these runaway processes should be pruned.

This project would of course require creativity on the part of the AI, to think outside the box, so to speak.

I think the most profound takeaway from this is that everything we know could, in theory, be derived from so little.

Like, the AI probably already knows what moving pictures are. 

Your thoughts on this insanity? 


I think I (at least sort of) get what you're getting at, but I get too caught up on the prospect of using an AI as an example. Or do you specifically mean that it must be an AI? You might want to take a look at genetic algorithms and neural networks... but basically all AI just "mimics" intelligence, and can't get "smart" without any training data.

Actually, as a bit of off-topic, that reminds me of the time we fed about 40 megabytes (doesn't sound like a lot nowadays, but the Bible's New Testament in Finnish, in plain text and with all the verse and chapter numbers, is something like <2 megabytes) worth of erotic novellas to a chat bot that "learns to talk" from its input data (building Markov chains from it), and then sent it to a public IRC channel... hilarity ensued :D
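
For the curious, the core of such a "learning" bot is tiny. A generic word-level Markov chain sketch (not the actual bot we used):

```python
import random
from collections import defaultdict

# Build a word-level Markov chain: map each word to the list of words that
# followed it in the training text, then walk the chain randomly.

def build_chain(text):
    chain = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def babble(chain, start, length=20):
    word, out = start, [start]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:          # dead end: no recorded successor
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ate the rat"
chain = build_chain(corpus)
print(babble(chain, "the"))
```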


Just now, Rehab1 said:

Wow Esaj... is it like 3 am where you live? Now my head really hurts!

4:31 AM on Monday morning, actually. I can't sleep since my vacation just ended and I have to get back to work today... time to reset my sleeping rhythm, I guess.

PS. As even more off-topic, I just blew one of the two most expensive (meaning >2€/piece ;)) op-amps I've ever bought by accidentally placing it upside down on a testing circuit... :angry:  At least I still have the other one left...


It doesn't have to be an AI specifically, like in this example, but it does need to be someone or something smart enough to figure these things out on its own. I do not believe that an embryo growing up in the void would come close to figuring any of this out. I'd also argue against AI (as in strong AI) merely being able to "mimic" intelligence. Yet this is one of those things that's difficult to definitively prove one way or the other. What's the difference between an approximation of human-level intelligence and the real thing? Can you prove to me through text that you are truly human and not just some chat bot that can simulate human behavior really well?

I'm reminded of this from the Talos Principle.


3 minutes ago, Rehab1 said:

Oh no! Not sure how that equates in US dollars but sounds expensive!

About $2.25 or so ;)  So not going bankrupt over it, but everything's relative... the cheaper stuff I typically use is like 1-2 cents per piece :P

 

3 minutes ago, Rehab1 said:

This is your mother talking... Go to BED!!

Mom? What are you doing up this late, back to bed! :P

 

2 minutes ago, WakefulTraveller said:

It doesn't have to be an AI specifically, like in this example, but it does need to be someone or something smart enough to figure these things out on its own. I do not believe that an embryo growing up in the void would come close to figuring any of this out. I'd also argue against AI (as in strong AI) merely being able to "mimic" intelligence.

That's pretty much the point: AI does NOT figure things out on its own. It has no "mind" as such; it's only a computer program following a set of instructions, an algorithm that sort of "mutates" as it's fed more and more data in order to create some wanted (or unwanted; especially with neural networks and genetic algorithms, the results can be surprising ;)) output. Actually, your so-called "smart" phone and computer and whatnot are not any smarter than a pocket calculator or a phone network switchboard. At their cores (and simplified), modern computers aren't any "smarter" than what we had 30 or 50 years ago; they're just a lot, lot faster and have a huge amount of memory to work with in comparison. It's the speed and storage space that have allowed better mimicry of intelligence; the basic models (finite state machines, fuzzy logic, neural networks etc.) have existed for a very long time (some from even before computers could be built to run them).

https://en.wikipedia.org/wiki/Artificial_neural_network#History

Warren McCulloch and Walter Pitts[3] (1943) created a computational model for neural networks based on mathematics and algorithms called threshold logic. This model paved the way for neural network research to split into two distinct approaches. One approach focused on biological processes in the brain and the other focused on the application of neural networks to artificial intelligence.

In the late 1940s psychologist Donald Hebb[4] created a hypothesis of learning based on the mechanism of neural plasticity that is now known as Hebbian learning. Hebbian learning is considered to be a 'typical' unsupervised learning rule and its later variants were early models for long term potentiation. Researchers started applying these ideas to computational models in 1948 with Turing's B-type machines.

Farley and Wesley A. Clark[5] (1954) first used computational machines, then called "calculators," to simulate a Hebbian network at MIT. Other neural network computational machines were created by Rochester, Holland, Habit, and Duda[6] (1956).

Frank Rosenblatt[7] (1958) created the perceptron, an algorithm for pattern recognition based on a two-layer computer learning network using simple addition and subtraction. With mathematical notation, Rosenblatt also described circuitry not in the basic perceptron, such as the exclusive-or circuit, a circuit which could not be processed by neural networks until after the backpropagation algorithm was created by Paul Werbos[8] (1975).

Neural network research stagnated after the publication of machine learning research by Marvin Minsky and Seymour Papert[9] (1969), who discovered two key issues with the computational machines that processed neural networks. The first was that basic perceptrons were incapable of processing the exclusive-or circuit. The second significant issue was that computers didn't have enough processing power to effectively handle the long run time required by large neural networks. Neural network research slowed until computers achieved greater processing power.
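
To see how simple the "addition and subtraction" in Rosenblatt's rule really is, here's a minimal sketch of a single threshold unit (my own toy code, not the historical implementation). It learns AND, but as Minsky and Papert pointed out, no single-layer perceptron can ever learn XOR:

```python
# A minimal single-layer perceptron (threshold unit), in the spirit of
# Rosenblatt's rule: nudge the weights by addition/subtraction on each error.

def train_perceptron(samples, epochs=20, lr=1.0):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b    += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

for name, data in (("AND", AND), ("XOR", XOR)):
    w, b = train_perceptron(data)
    preds = [1 if w[0]*x1 + w[1]*x2 + b > 0 else 0 for (x1, x2), _ in data]
    print(name, "learned:", preds == [t for _, t in data])
# AND converges; XOR never can, because it isn't linearly separable.
```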

 

 

2 minutes ago, WakefulTraveller said:

Yet this is one of those things that's difficult to definitively prove one way or the other. What's the difference between an approximation of human-level intelligence and the real thing? Can you prove to me through text that you are truly human and not just some chat bot that can simulate human behavior really well?

I guess it all comes down to your definition of "intelligence". But even the most "clever" algorithm will only work within a fairly strict set of rules... Given enough neurons, a neural network could probably become as complex as the human brain, but the amount of computing power and memory required to run such a network is probably still out of reach (I think the modern multi-million or -billion dollar supercomputers can get close to... an insect? when it comes to mimicking biological neural networks).


@esaj What you say is certainly interesting...

I'd agree that all of the AI we have seen thus far is rather primitive and only gives the semblance of intelligence until you look behind the curtain. Some examples of these primitive AIs are IBM Watson and Siri.

These weak AIs behave no differently than a web translator like Google Translate. The translator deals with words and sentence structure; it doesn't try to actually understand the meaning of the text it is translating. It is not trying to comprehend anything. Instead it relies heavily on thousands of clever algorithms and the vastness of its databases to do all of the work. This is not what I consider true intelligence, although it can in theory simulate intelligence to the point where there might as well be no difference between the simulation and the real thing, at least from an outside perspective.

Then there's AI that can really understand things and meaning, that learns in the same ways that we do, with or without neural networks, and perhaps even more efficiently and with the assistance of computational logic. The closest we have come to this type of AI is in how it is imagined in books, movies, and other forms of media. If we can create a seed for this type of intelligence, it would be truly revolutionary.


28 minutes ago, WakefulTraveller said:

@esaj What you say is certainly interesting...

I'd agree that all of the AI we have seen thus far is rather primitive and only gives the semblance of intelligence until you look behind the curtain. Some examples of these primitive AIs are IBM Watson and Siri.

These weak AIs behave no differently than a web translator like Google Translate. The translator deals with words and sentence structure; it doesn't try to actually understand the meaning of the text it is translating. It is not trying to comprehend anything. Instead it relies heavily on thousands of clever algorithms and the vastness of its databases to do all of the work. This is not what I consider true intelligence, although it can in theory simulate intelligence to the point where there might as well be no difference between the simulation and the real thing, at least from an outside perspective.

This is exactly the "type" of intelligence computers are capable of... all they can do is, well, "compute" (do calculations); it's up to the programmer to come up with clever ways to make that seem intelligent (that, and the vast datasets the programs can be fed).

 

Quote

Then there's AI that can really understand things and meaning, that learns in the same ways that we do, with or without neural networks, and perhaps even more efficiently and with the assistance of computational logic.

I don't think this type of intelligence can be created with our limited technology (in computers). Maybe if it were somehow interfaced with a living being. Learning in the same ways as we do, well, maybe again with enough computing power and memory, but it would still be a machine (i.e. it doesn't have feelings or an actual mind, it just follows a program). It won't be "good" or "evil", even if it did things we have deemed good or evil; it would just be following its instructions, driven by them (and by data, which may be fed in advance or continuously collected from the environment through speech recognition, cameras etc.).

Something like Skynet from the Terminator movies is wholly possible, but what must be understood is that Skynet is a computer program, not good or evil (although the latest movie showed it as having a sort of personality ;)). Basically, it was programmed to protect its own existence, and once the algorithm "learned" that any human could shut it down, it categorized all humans as enemies and began exterminating them en masse to protect itself. The program itself doesn't "see" itself as "good" or "evil"; it just follows its programming. That is the kind of thing that "learning" artificial intelligence is capable of; sometimes the results can be a bit surprising if everything hasn't been taken into account. :P

It does kind of kill the "romance" of the movies to think about how that kind of thing would work in reality, though :D  Humans vs. the "evil" ATM machine... I've done a fair bit of twiddling with genetic algorithms, written finite-state-machine-based AIs for games, and done game-like simulations of ant hills and whatnot, but no "serious" AI.

 

Quote

The closest we have come to this type of AI is in how it is imagined in books, movies, and other forms of media. If we can create a seed for this type of intelligence, it would be truly revolutionary.

The thing with those books and movies is that they're always written by people who don't understand how computers or software work. Don't get me wrong, I love science fiction, for example Alastair Reynolds or William Gibson. Gibson especially has lots of books with "self-aware" AIs, but he has admitted that he knows very little about computers, programming and real artificial intelligence, and just made all that stuff up ;)

When an interviewer in 1988 asked about the Bulletin Board System jargon in his writing, Gibson answered "I'd never so much as touched a PC when I wrote Neuromancer"; he was familiar, he said, with the science-fiction community, which overlapped with the BBS community. Gibson similarly did not play computer games despite their appearing in his stories.[142] He wrote Neuromancer on a 1927 olive-green Hermes portable typewriter, which Gibson described as "the kind of thing Hemingway would have used in the field".[51][142][VIII] By 1988 he used an Apple IIc and AppleWorks to write, with a modem ("I don't really use it for anything"),[142] but until 1996 Gibson did not have an email address, a lack he explained at the time to have been motivated by a desire to avoid correspondence that would distract him from writing.[74] His first exposure to a website came while writing Idoru, when a web developer built one for Gibson.[143] In 2007 he said, "I have a 2005 PowerBook G4, a gig of memory, wireless router. That's it. I'm anything but an early adopter, generally. In fact, I've never really been very interested in computers themselves. I don't watch them; I watch how people behave around them. That's becoming more difficult to do because everything is 'around them'."[56]

 


@esaj I do not believe that intelligence is limited to some percentage of the animal kingdom. Instead I see it as wholly possible to reverse-engineer consciousness and rewrite it in a computational form. After all, are we not simply a form of machine ourselves, with our own genetic code and processes?

Perhaps the most straightforward way to translate consciousness over is to simulate every neuron in the brain, although I see this as a vastly inefficient method with limited potential.

I see the AI in many movies as not too far-fetched. Like in I, Robot, a favorite movie of mine. Or in Ex Machina.

Unfortunately, most of our focus in machine learning seems to be centered around doing specific tasks more intelligently, like playing Go, or, as you mentioned, genetic algorithms used to identify genetic diseases or traits.


3 hours ago, WakefulTraveller said:

@esaj I do not believe that intelligence is limited to some percentage of the animal kingdom. Instead I see it as wholly possible to reverse-engineer consciousness and rewrite it in a computational form. After all, are we not simply a form of machine ourselves, with our own genetic code and processes?

Yes, with enough time and smart people, we can probably eventually map out the full genome (or has that already happened?), formulate enough equations to have a very deep understanding of how the brain works, etc., but again, will "mimicking" all that behavior still create a "consciousness"? (And as a more hypothetical question, what is consciousness? ;))

 

Quote

Perhaps the most straightforward way to translate consciousness over is to simulate every neuron in the brain, although I see this as a vastly inefficient method with limited potential.

Unfortunately, I think at the moment it is still one of the best (known) methods for more complex AI.

 

Quote

I see the AI in many movies as not too far-fetched. Like in I, Robot, a favorite movie of mine. Or in Ex Machina.

Maybe with the advent of quantum computers we'll finally have enough computational power to simulate more complex "intelligence".

 

Quote

Unfortunately, most of our focus in machine learning seems to be centered around doing specific tasks more intelligently, like playing Go, or, as you mentioned, genetic algorithms used to identify genetic diseases or traits.

That's mostly because such problems are usually (more) easily solvable computationally. Go is a "hard" problem because it requires pattern recognition, whereas something like chess is simpler due to its rules.

Genetic algorithms could be useful for genetic disease research, but despite the name, they are not only about genetics. They're also known as evolutionary algorithms, and the name refers to how they work, not what they're used for. They're useful for optimization problems, i.e. ones where you try to find a more optimal solution within a larger search space.

In a nutshell, you have a gene pool, that is, a selected number of candidate solutions. Typically at the start, the gene pool is filled by generating a number of genes randomly. The algorithm then starts looking for a more optimal solution by cross-breeding: pick two (or sometimes more) parents and combine them, for example by cutting each around the middle and joining two of the pieces, kind of like half of the genes of the offspring coming from one parent and half from the other. There are other ways to do that too, but that's just one example. Another thing the algorithm does is mutate the genes by making (small) random changes in them. After the gene pool is repopulated, the new genes are checked against some sort of error or fitness function, i.e. something that measures "how good" the current solutions are.

After that, you for example pick the very best candidates (selective breeding) or just use the new genes as the new pool. This is called a single generation. The process then repeats until either the algorithm deems that a "good enough" solution has been found, or it is left running indefinitely until the program is halted (manually). You can probably see how it resembles natural selection or evolution; that's why they're called genetic or evolutionary algorithms.

Here's a (somewhat) well-known example task for a genetic algorithm (I've done many variations of this myself): finding an approximation of the Mona Lisa represented by nothing but (a fixed number of) colored triangles (or polygons, or rectangles, or whatever you like):

[Image: the evolution of a Mona Lisa approximation across generations, from the link below]

The numbers in the image names represent the number of generations it took to get to that solution. The error function compares the result to the original image pixel by pixel, summing the differences in color values to get the "fitness" of the gene, so the smaller the number, the better the solution.

https://rogeralsing.com/2008/12/07/genetic-programming-evolution-of-mona-lisa/
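
That error function is simple enough to sketch (assuming Pillow and NumPy, and that both images are the same size; Alsing's actual implementation may differ):

```python
import numpy as np
from PIL import Image  # Pillow; assumes both images exist on disk

# Fitness = sum of absolute per-pixel color differences between the
# candidate rendering and the target image: lower is better.

def fitness(candidate_path, target_path):
    cand = np.asarray(Image.open(candidate_path).convert("RGB"), dtype=np.int64)
    target = np.asarray(Image.open(target_path).convert("RGB"), dtype=np.int64)
    return int(np.abs(cand - target).sum())
```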

A typical way I've personally done genetic algorithms is as follows (a minimal sketch follows the list):

  • Populate a gene pool of size X with random genes
  • Pick two random parents at a time and cross-breed them
    • Sometimes, mutate the result, but not always... I've found that varying the rate and amount of mutation as the solution evolves tends to lead to better results over time than using a fixed mutation rate & probability
    • Zero mutation tends to get stuck at some point, i.e. the gene pool becomes too "in-bred" and no longer produces better solutions
    • Too high a mutation rate will pollute the pool, and it won't converge towards a more optimal solution because of too much "noise"
  • Keep something like X / 10 all-time best genes to prevent the gene pool from degenerating totally (a variation of selective breeding)
    • Keeping too many all-time best genes tends to lead to a situation where the solution gets stuck at a local minimum or maximum (depending on whether you're looking for genes with minimal or maximal fitness, i.e. whether it is better for a gene to have a smaller or larger fitness number)
    • Not keeping the very best genes can lead to the gene pool degenerating, i.e. producing only worse solutions over time (do note that sometimes the gene pool needs to deteriorate somewhat to get past a local minimum or maximum)
  • Repeat the process indefinitely, or until an acceptable solution has been found (depending on what the algorithm does)
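
Here's roughly that recipe on a toy problem: evolving a bit string to match a fixed target. The pool size, mutation schedule and target are arbitrary choices for illustration:

```python
import random

# Sketch of the recipe above: pool of size X, keep X // 10 all-time best
# genes (elitism), and decay the mutation rate as the solution evolves.

TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1]
X = 50  # gene pool size

def fitness(gene):                      # smaller is better here
    return sum(g != t for g, t in zip(gene, TARGET))

def crossbreed(a, b):                   # one-point crossover
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:]

def mutate(gene, rate):                 # small random changes
    return [1 - g if random.random() < rate else g for g in gene]

pool = [[random.randint(0, 1) for _ in TARGET] for _ in range(X)]
best_ever = []

for generation in range(200):
    pool.sort(key=fitness)
    best_ever = sorted(best_ever + pool[: X // 10], key=fitness)[: X // 10]
    rate = max(0.01, 0.2 * (1 - generation / 200))   # decaying mutation rate
    offspring = []
    while len(offspring) < X - len(best_ever):
        a, b = random.sample(pool, 2)                # two random parents
        offspring.append(mutate(crossbreed(a, b), rate))
    pool = best_ever + offspring                     # keep the all-time elite
    if fitness(pool[0]) == 0:
        break

print("generations:", generation, "best:", pool[0])
```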

Like I said, I've done a number of different approaches to the Mona Lisa problem, as well as genetic compression algorithms (which have the problem that, while they can at times beat traditional compression algorithms in size reduction, finding a more optimal solution for a file can take hours or days, whereas "traditional" methods are done in seconds or minutes :D), simulations of different kinds of growth and evolution, game AI and such with this method.

Another better-known thing genetic algorithms have been used for is the NASA satellite antenna: https://en.wikipedia.org/wiki/Evolved_antenna


  • 1 month later...

It's probably not too hard to build a reverse-engineered brain if you have the right hardware. The problem is that all our processors are basically on a 2D plane; existing processors are very limited when it comes to communicating between different subsections and can only be made in modest sizes, meaning that increasing power means adding chips. As soon as you have multiple chips, you run into bus limits and latency issues.

That conventional silicon chips have scaled so well over the last few decades is both good and bad: the upside is that computing power advanced rapidly; the downside is that it has made adopting radically different computing hardware less attractive, as conventional silicon was considered good enough.

You likely could not program an AI based on a brain; you would have to train it like an animal or child. There would still be huge advantages: once trained, it could be duplicated, and training could take place in VR environments running at much higher speeds. An AI replacement for a guide dog could receive 100 years' worth of training in a VR simulation and then be copied into millions of robots.

Silicon transistors compare pretty favorably to biological neurons, but with no way to manufacture complex 3D arrays of them, it will be hard to match brains. If you could assemble a 10 cm × 10 cm × 10 cm cube of 14 nm transistors and connections set up for running neural networks, I imagine it would compare well to biological brains.
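
As a rough back-of-envelope (the (50 nm)³ cell per transistor-plus-wiring is purely an assumption for illustration, not a datasheet figure):

```python
# Back-of-envelope only: how many transistors fit in a 10 cm cube if each
# transistor plus its wiring occupies an assumed (50 nm)^3 cell.
cell_edge = 50e-9                  # meters (assumption)
cube_edge = 0.1                    # 10 cm in meters
transistors = (cube_edge / cell_edge) ** 3
neurons = 86e9                     # rough human-brain neuron count
print(f"{transistors:.1e} transistors, ~{transistors / neurons:,.0f}x the neuron count")
# ~8e18 transistors, about 10^8 times the brain's neuron count (though a
# neuron is far more complex than a single transistor).
```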


