The architecture of artificial neural networks is modeled after the human brain. However, even relatively simple biological organisms, once they exceed a certain threshold of complexity, are capable of data classification and analysis. This capability is embedded in them through evolution rather than training. In this study, we demonstrate how basic functional assumptions, derived from physical and environmental constraints, lead to a self-consistent model that explains these capabilities. Furthermore, this model can serve as a foundation for more advanced models, since its principles scale readily. These capabilities naturally develop over time, and we trace their development up to the point where such a network can solve NP-hard problems in polynomial time. The efficient problem solving demonstrated in this study calls for further quantitative examination of more complex models built on the same principles.