Test Header!
In relating the brain (human or animal) to a machine, which would more closely match it in terms of operation: the von Neumann architecture, or a Harvard/modified Harvard architecture? I am not a neurobiologist or psychologist; I find interest in these things only as a hobby, so I am not an expert. I don't know how the brain works beyond some layman explanations and lectures, and it seems we have a huge pile of data but no coherent, or at least widely accepted, theory of the brain yet. That aside, my hypothesis involves artificial awareness in machines.
My current hypothesis is that the von Neumann architecture is the key (to emulating a brain), since I have a feeling that the brain's parallel system stores both opcodes (instructions) and memory (data) in the same place, using the same pathways (analogous to a data bus). I understand that if I tried to simulate even a simple system linearly, at a scale rivaling the size and distributed processing power of real neurons, it would be very slow unless I had a far more capable computing platform. But I feel that anything "magical" in such a system, of the sort quantum mechanics might imply, is simply not true or required (only an opinion; I have no proof at the moment). Given that, a conscious machine should be possible to emulate (based on my hypothesis) on a linear processor with sufficient processing power or speed. The whole design of the system would depend on whether opcodes and memory should be stored in the same manner in program space (analogous to the physical neuron - axon - synapse - dendrite pathway or structure, etc.).
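To make concrete what I mean by "same place, same pathways," here is a toy sketch of my own (hypothetical opcodes, not any real instruction set) of a von Neumann-style machine in Python: one flat list serves as both instruction memory and data memory, so a running program can patch its own instructions.

```python
def run(mem, max_steps=100):
    """Toy von Neumann machine: `mem` is ONE flat word list holding both
    code and data. Each instruction is three words (op, a, b).
    Hypothetical opcodes: 0 HALT, 1 PRINT mem[a], 2 ADD b to mem[a],
    3 COPY mem[b] into mem[a]."""
    pc = 0  # the program counter walks the same memory the data lives in
    for _ in range(max_steps):
        op, a, b = mem[pc], mem[pc + 1], mem[pc + 2]
        pc += 3
        if op == 0:        # HALT
            break
        elif op == 1:      # PRINT
            print(mem[a])
        elif op == 2:      # ADD immediate
            mem[a] += b
        elif op == 3:      # COPY -- can overwrite instructions, too
            mem[a] = mem[b]

# Code and data share one address space. The COPY at address 0 patches
# the PRINT's operand (address 4) before it executes, so the machine
# rewrites itself mid-run and prints 42.
run([3, 4, 12,    # COPY mem[12] -> mem[4] (patch the next instruction)
     1, 0, 0,     # PRINT mem[a]; `a` sits at address 4, patched to 13
     0, 0, 0,     # HALT
     0, 0, 0,     # (padding)
     13, 42])     # data: address 12 = 13, address 13 = 42
```

I'm not claiming neurons literally do this; it's just the property of the architecture my hypothesis leans on: instructions are data, so the "program" can grow and rewrite itself in place.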
However, I do not see a reason why the Harvard architecture (which separates instructions and data) would be unable to do the same thing, given that it could write its own pseudo-code into data memory and interpret it (it just seems less efficient). Then again, I admit I fail to see any biological system striving for efficiency, only the "just good enough" principle evolution employs to make things work (fine-tuning aside). Because of this, I remain uncertain whether my hypothesis even has grounds to be worked on at all. My old computer models provided no demonstrable proof of the hypothesis, so it has been hard to keep working on the project.
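And here is the workaround I mean for the Harvard case (same hypothetical opcodes as in the sketch above): the native program is fixed and separate from data, but if that fixed program is an interpreter, the self-modifiable pseudo-code can live in ordinary writable data memory. You recover the von Neumann trick, just one level of indirection slower.

```python
def harvard_step(data_mem, pc):
    """One step of interpreted pseudo-code held in writable DATA memory.
    This function stands in for the machine's immutable instruction
    memory: it never changes, only `data_mem` does.
    Opcodes as before: 0 HALT, 1 PRINT, 2 ADD, 3 COPY."""
    op, a, b = data_mem[pc], data_mem[pc + 1], data_mem[pc + 2]
    if op == 0:
        return None                    # halt the interpreted program
    if op == 1:
        print(data_mem[a])
    elif op == 2:
        data_mem[a] += b
    elif op == 3:
        data_mem[a] = data_mem[b]      # rewriting pseudo-code is fine: it's data
    return pc + 3

# The same self-patching program as before, now one level down:
data = [3, 4, 12,  1, 0, 0,  0, 0, 0,  0, 0, 0,  13, 42]
pc = 0
while pc is not None:                  # this fixed loop is the "real" code
    pc = harvard_step(data, pc)
```

The inefficiency I mentioned is visible right in the structure: every pseudo-instruction costs a full pass through the interpreter.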
On a side note, it is amazing to see that even simple things like pattern recognition and learning actually require a great deal of computing resources, whereas biological organisms get them cheaply through distributed processing. If only all the old laptops and desktops I have acquired so far had even a fraction of the power of a small bug's brain. I might be able to employ some shortcuts (hopefully without ruining the system and preventing it from emulating the machine intelligence I'm looking for).
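As a back-of-the-envelope illustration of how fast the costs pile up: below is about the smallest "pattern recognizer" there is, a single perceptron learning logical AND, and even it spends a multiply-add per weight, per example, per pass. Scale that same inner loop to something bug-sized (very roughly 10^5 neurons with thousands of connections each; loose guesses on my part, not measurements) and a serial machine is grinding through billions of operations per simulated instant.

```python
# Training data for logical AND: ([bias, x1, x2], label)
examples = [([1, 0, 0], 0), ([1, 0, 1], 0), ([1, 1, 0], 0), ([1, 1, 1], 1)]
w = [0.0, 0.0, 0.0]             # weights, including the bias weight
lr = 0.1                        # learning rate

for _ in range(25):             # a few passes is plenty for AND
    for x, target in examples:
        s = sum(wi * xi for wi, xi in zip(w, x))          # weighted sum
        y = 1 if s > 0 else 0                             # threshold unit
        err = target - y
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]  # perceptron rule

for x, target in examples:
    s = sum(wi * xi for wi, xi in zip(w, x))
    print(f"{x[1:]} -> {1 if s > 0 else 0} (want {target})")
```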
I put this question here because I currently have no other forum or blog where I can ask it. I was hoping that maybe some furry or other person glancing at this might be able to provide some insight, or at least an opinion. If not, it might at least be interesting or pique some curiosity.
Yours truly, Twilight Sparkle... I joke.
Phocidae (OP):
Option B: stop thinking about it (smarter, not harder), toss the computers in the garbage, and just grow brains! (Yeah, that's not going to happen... yet.) "But how do we wash them?" ~ program...