Supermathematics and Artificial General Intelligence
![7NbZgH8.gif](https://i.imgur.com/7NbZgH8.gif)
This thread concerns attempts to construct artificial general intelligence, which, as I often underline, may well be mankind's last invention.
(For more on the "last invention" remark above, see "Artificial General Intelligence: Humanity's Last Invention | Ben Goertzel".)
I am asking anyone who knows supermathematics and machine learning to weigh in on the discussion below.
PART A
Back in 2016, I read somewhere that babies know some physics intuitively. (See "Your baby is doing little physics experiments all the time, according to a new study")
Babies also appear to use that intuition to develop abstractions of knowledge, in a reinforcement-learning-like manner.
PART B
Now, I already knew of two major types of deep learning model:
(1) models that use reinforcement learning (DeepMind's Atari DQN);
(2) models that learn laws of physics (UETorch).
However:
(a) Object detectors like (2) use something called pooling to gain translation invariance over objects, so that the model learns regardless of where the object is positioned in the image.
(b) By contrast, (1) excludes pooling, because it requires translation variance, so that Q-learning can act on the changing positions of objects in pixel space. (A toy sketch of this invariance/variance contrast follows below.)
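To make the pooling point concrete, here is a minimal numpy sketch of my own (not code from UETorch or DeepMind's DQN): pooling collapses *where* an object is, while the unpooled map keeps that positional information available.

```python
# Toy illustration: translation invariance via pooling vs. translation variance.
# A 1-D "feature map" stands in for a conv layer's output.
import numpy as np

def global_max_pool(feature_map):
    # Pooling discards position: only the strongest activation survives.
    return feature_map.max()

base = np.zeros(8)
base[2] = 1.0               # an "object" detected at position 2
shifted = np.roll(base, 3)  # the same object, translated to position 5

# With pooling, both inputs look identical -> translation invariance.
print(global_max_pool(base), global_max_pool(shifted))   # 1.0 1.0

# Without pooling, the raw maps differ -> translation variance, which lets a
# Q-learning agent react to *where* the object is, not just that it exists.
print(np.array_equal(base, shifted))                      # False
print(int(base.argmax()), int(shifted.argmax()))          # 2 5
```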
PART C
As a result, I sought a model that could deliver both translation invariance and translation variance at the same time, and reasonably, part of the solution was models that disentangle factors of variation, i.e. manifold learning frameworks.
I didn't stop my scientific thinking at manifold learning, though.
Given that cognitive science may be used to constrain machine learning models (much as firms like DeepMind often use cognitive science as a boundary on the deep learning models they produce), I sought to create a disentanglable model that was constrained by cognitive science as far as the algebra would permit.
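As a rough numpy sketch of what "disentangling factors of variation" involves (my own toy stand-in, not the model from the paper): many manifold-learning objectives include a term that penalises correlation between latent coordinates, so that each coordinate tends to track a separate factor.

```python
# Toy decorrelation penalty over learnt coordinates z = enc(x).
# A real disentangling model (e.g. a VAE variant) would wrap this in a full
# encoder/decoder; only the penalty itself is shown here.
import numpy as np

def decorrelation_penalty(latents):
    """latents: (batch, dims) array of latent coordinates."""
    centred = latents - latents.mean(axis=0, keepdims=True)
    cov = centred.T @ centred / len(latents)
    off_diag = cov - np.diag(np.diag(cov))
    return float((off_diag ** 2).sum())   # ~0 when coordinates are uncorrelated

rng = np.random.default_rng(0)
entangled = rng.normal(size=(256, 1)) @ np.ones((1, 4))   # one factor copied 4x
disentangled = rng.normal(size=(256, 4))                   # four independent factors
print(decorrelation_penalty(entangled) > decorrelation_penalty(disentangled))  # True
```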
PART D
As a result, I created something called the supermanifold hypothesis in deep learning (a component of another description called 'thought curvature').
This was motivated by evidence of supersymmetry in cognitive science; I compacted machine-learning-related algebra for disentangling into the regime of supermanifolds. This can be seen as an extension of manifold learning in artificial intelligence.
Given that the supermanifold hypothesis compounds ϕ(x; θ, θ̄)ᵀw:
![ncrjUdkm.png](https://i.imgur.com/ncrjUdkm.png)
1. Deep learning entails ϕ(x; θ)ᵀw, which denotes the input space x and learnt representations θ.
2. Deep learning underlines that coordinates or latent spaces in the manifold framework are learnt features/representations, or directions that are sparse configurations of coordinates.
3. Supermathematics entails (x, θ, θ̄), which denotes some x-valued coordinate distribution and, by extension, directions that compact coordinates via θ, θ̄.
4. As such, the aforesaid (x, θ, θ̄) is subject to coordinate transformation.
5. Thereafter, points 1, 2, 3, 4, together with supersymmetry in cognitive science (i.e. the paper "Supersymmetric Methods in the travelling variable: inside neurons..."), within the generalizable nature of Euclidean space, reasonably effectuate ϕ(x; θ, θ̄)ᵀw.
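For readers unfamiliar with the θ, θ̄ above, here is a toy numpy sketch of Grassmann (anticommuting, nilpotent) coordinates. It only illustrates the algebra behind the superspace point (x, θ, θ̄); it is not the supermanifold hypothesis model itself, and the final "ϕ(x; θ, θ̄)ᵀw"-style line is purely my own hypothetical pairing of such a point with a weight.

```python
# Supernumbers over the basis {1, θ, θ̄, θθ̄}, with θ² = θ̄² = 0 and θθ̄ = -θ̄θ.
import numpy as np

class Supernumber:
    def __init__(self, a=0.0, b=0.0, c=0.0, d=0.0):
        self.coeffs = np.array([a, b, c, d], dtype=float)  # 1, θ, θ̄, θθ̄

    def __add__(self, other):
        return Supernumber(*(self.coeffs + other.coeffs))

    def __mul__(self, other):
        a1, b1, c1, d1 = self.coeffs
        a2, b2, c2, d2 = other.coeffs
        return Supernumber(
            a1 * a2,
            a1 * b2 + b1 * a2,
            a1 * c2 + c1 * a2,
            a1 * d2 + d1 * a2 + b1 * c2 - c1 * b2,  # sign from anticommutation
        )

    def __repr__(self):
        return "Supernumber" + str(tuple(self.coeffs))

theta = Supernumber(b=1.0)
theta_bar = Supernumber(c=1.0)

print(theta * theta)                                             # all zeros: θ is nilpotent
print((theta * theta_bar).coeffs + (theta_bar * theta).coeffs)   # zeros: θ, θ̄ anticommute

# A toy "super" feature: x promoted to a point (x, θ, θ̄) and scaled by a weight,
# loosely echoing the ϕ(x; θ, θ̄)ᵀw expression above (illustration only).
x, w = 2.0, 0.5
phi = Supernumber(a=x) + theta + theta_bar
print(Supernumber(a=w) * phi)
```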
QUESTIONS:
Does anybody here have good knowledge of supermathematics or a related field, and can you give any input on the above?
If so, is it feasible to pursue the model I present in the supermanifold hypothesis paper?
And if so, apart from the ones discussed in the paper, what type of p̂_data (training samples) do you think warrants reasonable experiments in the regime of the model I presented?