I'm under the impression that, at least theoretically, Von Neumann's principles of self-replication, game theory, or optimization could be applied in the context of designing neural network structures.
You could think about organizing a neural network with layers or nodes that are indexed by Von Neumann ordinals, where the structure of the network follows the natural progression of ordinals. For example:
Each layer or node in the neural network could correspond to a finite ordinal (such as 0, 1, 2, etc.) or a transfinite ordinal (like ω, ω+1, etc.). The way the network expands and evolves could follow the ordering and progression inherent in the Von Neumann ordinal system.
This could lead to an architecture where early layers (low ordinals) represent simpler, more basic computations (e.g., feature extraction or basic transformations), while later layers (higher ordinals) correspond to more complex, abstract processing and deeper representations.
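To make the idea a bit more concrete, here is a minimal Python/PyTorch sketch of what a (necessarily finite) ordinal-indexed network might look like: layers indexed by 0, 1, 2, ..., n−1, with each successor ordinal given more capacity. The function name `ordinal_indexed_net` and the particular width schedule are my own illustrative choices, not an established design; transfinite ordinals like ω have no finite realization, so the index set has to be truncated.

```python
# A minimal sketch (illustration only, not an established architecture):
# layers indexed by the finite ordinals 0, 1, 2, ..., n-1, where each
# successor ordinal gets a deeper, wider block. Transfinite stages (ω, ω+1, ...)
# cannot be realized, so the construction stops at a finite n.
import torch
import torch.nn as nn

def ordinal_indexed_net(n_ordinals: int, in_dim: int, base_width: int = 16) -> nn.Sequential:
    layers = []
    width = in_dim
    for alpha in range(n_ordinals):            # alpha plays the role of a finite Von Neumann ordinal
        next_width = base_width * (alpha + 1)  # higher ordinals get more capacity / abstraction
        layers.append(nn.Linear(width, next_width))
        layers.append(nn.ReLU())
        width = next_width
    return nn.Sequential(*layers)

net = ordinal_indexed_net(n_ordinals=4, in_dim=8)
x = torch.randn(2, 8)
print(net(x).shape)  # torch.Size([2, 64])
```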
But I'm afraid there is no hardware substrate upon which to build such a thing.