Neurological Properties to Circumvent AI’s Error Reduction Impasse


Thaddeus JA Kobylarz*
Erik J Kobylarz

Abstract



Our paper proposes significant changes to AI technology. We believe these are necessary because current implementations have stagnated at average error rates of approximately 8%. Implementers hope that further improvements will lower error rates to 5% by 2025; that would require roughly 10²⁸ floating-point operations, which is not feasible with today's algorithms and computer technology. Even a 5% error rate is excessive for many practical applications.


Current AI implementations have produced ignominious errors: the near bankruptcy of a prominent real estate corporation and the obligatory resignation of an elected government official both resulted from AI errors. The errors behind these failures were ludicrous, and would have been unlikely had humans performed the same tasks. Applications of AI are therefore limited to those for which errors are nugatory.


In contrast, the human brain's capabilities and efficiency are astonishing. Compared with current AI models, the human brain is impressive in its relatively small size (adult average 79 in³), weight (approximately 4 lb), and power consumption (nominally 15 W). We believe this implies that AI technology needs to adopt neurological properties it has so far excluded.


The current AI neuron model is an overly simplified linear model proposed about 70 years ago. We propose emulating the neurological neuron's nonlinear capabilities. The versatility of the improved AI model would be many orders of magnitude beyond that of the currently implemented linear neuron models.
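The limitation of the linear model, and the gain from a nonlinear one, can be sketched with a toy example (our illustration, not the authors' specific proposal): a single linear threshold unit cannot compute XOR, the classic result of Minsky and Papert, whereas adding even one nonlinear cross term to the same unit suffices.

```python
import itertools

def linear_neuron(x, w, theta):
    """Classic linear threshold unit (McCulloch-Pitts / perceptron style):
    fires iff the weighted sum of inputs reaches the threshold theta."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

def quadratic_neuron(x, w, w2, theta):
    """Hypothetical nonlinear threshold unit: a single pairwise product
    term is added to the weighted sum, one simple way to model a
    nonlinear interaction between inputs."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + w2 * x[0] * x[1]
    return 1 if s >= theta else 0

# XOR is not linearly separable, so no choice of (w, theta) makes
# linear_neuron compute it; the quadratic unit handles it directly.
for x in itertools.product([0, 1], repeat=2):
    assert quadratic_neuron(x, w=[1, 1], w2=-2, theta=1) == (x[0] ^ x[1])
```

The function names, weights, and the quadratic cross term are illustrative assumptions; the point is only that a single nonlinear unit computes functions that no single linear unit can.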


We also propose adopting neurological properties of neural plasticity. Specifically, we describe the associative learning aspect of neuroplasticity, partitioning associative plasticity into "inter-association" (neural network structure) and "intra-association" (neuron functioning).
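One widely used formalization of associative plasticity is Hebb's rule ("cells that fire together wire together"). As a minimal sketch, under the assumption of a simple rate-based update (not the authors' specific mechanism):

```python
def hebbian_update(w, pre, post, lr=0.1):
    """Hebb's rule: strengthen each synaptic weight in proportion to
    correlated pre- and post-synaptic activity (learning rate lr)."""
    return [wi + lr * xi * post for wi, xi in zip(w, pre)]

# Present the same input pattern twice with an active output neuron;
# only the synapses on co-active inputs strengthen.
w = [0.0, 0.0, 0.0]
for _ in range(2):
    w = hebbian_update(w, pre=[1, 0, 1], post=1)
```

After the two presentations, the first and third weights have grown to about 0.2 while the second stays at zero: the association is stored in the connection strengths, which is the "intra-association" (neuron functioning) side of the partition above.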




Article Details

Kobylarz, T. J., & Kobylarz, E. J. (2023). Neurological Properties to Circumvent AI’s Error Reduction Impasse. Trends in Computer Science and Information Technology, 8(3), 061–072. https://doi.org/10.17352/tcsit.000070
Review Articles

Copyright (c) 2023 Kobylarz TJA, et al.


This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

