
Supermathematics and Artificial General Intelligence
#1
Supermathematics and Artificial General Intelligence

This thread concerns attempts to construct artificial general intelligence, which, as I often underline, may well be mankind's last invention.

(For more on the 'last invention' comment above, see "Artificial General Intelligence: Humanity's Last Invention | Ben Goertzel".)


I am asking anybody who knows supermathematics and machine learning to pitch in on the discussion below.



PART A
Back in 2016, I read somewhere that babies know some physics intuitively. (See "Your baby is doing little physics experiments all the time, according to a new study".)
Also, it is empirically observable that babies use that intuition to develop abstractions of knowledge, in a reinforcement-learning-like manner.



PART B
Now, I already knew of two major types of deep learning model:

(1) one that uses reinforcement learning (DeepMind's Atari Q-network);
(2) one that learns laws of physics (UETorch).

However:

(a) Object detectors like (2) use something called pooling to gain translation invariance over objects, so that the model learns regardless of where the object is positioned in the image.
(b) By contrast, (1) excludes pooling, because it requires translation variance, so that Q-learning can apply to the changing pixel positions of the objects. (A toy sketch contrasting the two regimes follows.)
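
To make (a) and (b) concrete, here is a toy NumPy sketch (my own illustration, not taken from either paper): max pooling maps a shifted object to the same features, while the raw pixel grid that a Q-network consumes changes with the shift. The array sizes and pooling window are illustrative assumptions.

```python
# Toy contrast between translation invariance (pooling) and
# translation variance (raw pixels); sizes are illustrative.
import numpy as np

def max_pool(img, k=4):
    """Non-overlapping k x k max pooling."""
    h, w = img.shape
    return img[:h // k * k, :w // k * k].reshape(h // k, k, w // k, k).max(axis=(1, 3))

img = np.zeros((8, 8))
img[1, 1] = 1.0                           # an "object" near the top-left
shifted = np.roll(img, shift=2, axis=1)   # the same object, moved 2 pixels right

# Pooled features are identical -> invariance (useful for object detection).
print(np.array_equal(max_pool(img), max_pool(shifted)))  # True

# Raw pixels differ -> variance (needed for Q-learning on object positions).
print(np.array_equal(img, shifted))                      # False
```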


PART C
As a result, I sought a model that could deliver both translation invariance and translation variance at the same time; reasonably, part of the solution lay in models that disentangle factors of variation, i.e. manifold learning frameworks.

I didn't stop my scientific thinking at manifold learning, though.

Given that cognitive science may be used to constrain machine learning models (much as firms like DeepMind often use cognitive science as a boundary on the deep learning models they produce), I sought to create a disentangleable model that was constrained by cognitive science as far as the algebra would permit.



PART D

As a result, I created something called the supermanifold hypothesis in deep learning (a component of a broader description called "thought curvature").

This was motivated by evidence of supersymmetry in cognitive science; I compacted machine-learning-related algebra for disentangling into the regime of supermanifolds. This can be seen as an extension of manifold learning in artificial intelligence.

Given that the supermanifold hypothesis compounds ϕ(x, θ, θ̄)ᵀw, here is an annotation of the hypothesis:

  1. Deep learning entails ϕ(x; θ)ᵀw, which denotes the input space x and the learnt representations θ. (A minimal sketch of this classical part follows the list.)
  2. Deep learning underlines that coordinates or latent spaces in the manifold framework are learnt features/representations, or directions that are sparse configurations of coordinates.
  3. Supermathematics entails (x, θ, θ̄), which denotes some x-valued coordinate distribution and, by extension, directions that compact coordinates via θθ̄.
  4. As such, the aforesaid (x, θ, θ̄) is subject to coordinate transformation.
  5. Thereafter, items 1 to 4, together with supersymmetry in cognitive science (i.e. the paper "Supersymmetric Methods in the travelling variable: inside neurons...") and the generalizable nature of Euclidean space, reasonably effectuate ϕ(x; θ, θ̄)ᵀw.
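
For concreteness, here is a minimal NumPy sketch of the classical part of item 1, ϕ(x; θ)ᵀw: a feature map ϕ with learnt parameters θ, read out by weights w. The single tanh layer and all shapes are my own illustrative assumptions; the Grassmann-valued coordinates θ, θ̄ of items 3 to 5 have no off-the-shelf implementation and are not represented here.

```python
# A minimal sketch of phi(x; theta)^T w: learned features phi with
# parameters theta, read out by a linear map w. Shapes are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def phi(x, theta):
    """phi(x; theta): a one-hidden-layer feature map (tanh nonlinearity)."""
    W1, b1 = theta
    return np.tanh(W1 @ x + b1)

x = rng.normal(size=3)                           # a point in the input space x
theta = (rng.normal(size=(5, 3)), np.zeros(5))   # learnt representation parameters
w = rng.normal(size=5)                           # output weights

y = phi(x, theta) @ w                            # phi(x; theta)^T w
print(y)
```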

QUESTIONS:

Does anybody here have good knowledge of supermathematics, or a related field, who can give input on the above?

If so, is it feasible to pursue the model I present in the supermanifold hypothesis paper?

And if so, apart from the ones discussed in the paper, what types of p̂_data (training samples) do you reckon would warrant reasonable experiments in the regime of the model I presented?
#2
RE: Supermathematics and Artificial General Intelligence
Machine learning is very popular right now because there is a lot of money to be made from statistical analysis of big data sets. This is what is driving the research; it is not the route to strong generalised artificial intelligence. But a new EU regulation (the GDPR) comes into force in May 2018 that requires that all automated decisions can be questioned and explained. It has always been hard enough to convince clients to use a simple three-layer back-propagation network that performs one statistical function; explaining how a deep learning model functions by extracting hundreds of salient variables will be near impossible.

Like all AI techniques, machine learning has its limitations.
#3
RE: Supermathematics and Artificial General Intelligence
I know something or other about how to do supersymmetry and superspace. What I don't understand is how you propose to use Grassmann variables in learning, because you don't say how.
The fool hath said in his heart, There is a God. They are corrupt, they have done abominable works, there is none that doeth good.
Psalm 14, KJV revised edition

#4
RE: Supermathematics and Artificial General Intelligence
(September 5, 2017 at 4:24 am)Alex K Wrote: I know something or other about how to do supersymmetry and superspace. What I don't understand is how you propose to use Grassmann variables in learning, because you don't say how.

Alex, at least from ϕ(x; θ)ᵀw, or the machine learning paradigm:

In the machine learning regime, something like the following applies:
  • Points maintain homeomorphisms: for any point p under a transition T along some transformation/translation t (pertinently, a continuous function with a continuous inverse), p0 (p before T) maps bijectively to p1 (p after T) along t.
  • Likewise, topologies maintain homeomorphisms: for any collection of points W (e.g. a matrix of weights) under some transition T along a sequence of transformations/translations s (pertinently, continuous functions with continuous inverses), W0 (W before T) maps bijectively to W1 (W after T) along s, provided that every representation of W has a non-zero determinant.
  • Topological homeomorphisms are maintained up to the point of linear separation/disentangling if and only if the neural network's dimension is sufficient (a small example: a minimum of 3 hidden units for a 2-dimensional W).
  • Otherwise, with insufficient dimension, or insufficient neuron firings per data unit, in non-ambient-isotopic topologies satisfying NOTE (ii), W will eventually yield a zero determinant, which prevents linear separation/disentangling: at zero determinant, unique solutions for the scalar multiplications dissolve, because the matrix becomes non-invertible. (A minimal sketch of this determinant criterion follows the notes.)
  1. NOTE (i): "Entangled" denotes the state before some otherwise hard-to-separate classes are disentangled/made linearly separable.
  2. NOTE (ii): Unique solutions in matrices are outcomes that resemble the data sets, for homeomorphisms (topologies engendered by continuous, invertible transformations/translations with non-zero determinants) or ambient isotopies (where positive/non-singular determinants, neuron permutations, and a minimum of hidden units occur; e.g. for a 1-dimensional manifold, 4 dimensions are required).
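
Here is a minimal sketch of the determinant criterion in the bullets above, assuming (as in Chris Olah's post, referenced below) a square layer tanh(Wx + b): since tanh is a continuous bijection onto its image with a continuous inverse, the layer is a homeomorphism exactly when W is invertible, i.e. when det(W) ≠ 0. The example matrices are illustrative.

```python
# Determinant test for whether a square layer tanh(Wx + b) preserves
# topology (is a homeomorphism); the example matrices are illustrative.
import numpy as np

def layer_is_homeomorphism(W, tol=1e-12):
    """tanh(Wx + b) with square W is a homeomorphism iff W is invertible."""
    return W.shape[0] == W.shape[1] and abs(np.linalg.det(W)) > tol

W_good = np.array([[2.0, 1.0], [0.0, 1.0]])  # det = 2  -> topology preserved
W_bad  = np.array([[1.0, 2.0], [2.0, 4.0]])  # det = 0  -> points are merged

print(layer_is_homeomorphism(W_good))  # True
print(layer_is_homeomorphism(W_bad))   # False: classes collapse, no separation
```
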
Beyond the sketch and my words above, look at what Chris Olah or others present:

Chris Olah: "Neural Networks, Manifolds, and Topology"
Kihyuk Sohn et. al: "Learning to Disentangle Factors of Variation with Manifold Interaction"


FOOTNOTE:
You are correct, though: I don't know much about supermathematics at all. But based at least on the generalizability of manifolds and supermanifolds, together with evidence that supersymmetry applies in cognitive science, I could formulate the algebra with respect to the deep learning variant of manifolds.

This means that, given the nature of supermanifolds and manifolds, there is no law preventing ϕ(x; θ, θ̄)ᵀw, some structure in Euclidean superspace that may subsume p̂_data (real-valued training samples). (A toy sketch of the anticommuting coordinates θ and θ̄ follows.)
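
For readers wondering what anticommuting coordinates would even look like computationally, here is a toy sketch of the two-generator Grassmann algebra behind θ and θ̄. This is entirely my own illustration; the hypothesis above does not specify an implementation. The defining relations are θ² = θ̄² = 0 and θθ̄ = −θ̄θ.

```python
# Toy two-generator Grassmann algebra over the basis (1, th, thbar, th*thbar),
# illustrating nilpotency and anticommutation; purely illustrative.
import numpy as np

class Grassmann2:
    """a + b*th + c*thbar + d*th*thbar, with th^2 = thbar^2 = 0, th*thbar = -thbar*th."""

    def __init__(self, c1=0.0, cth=0.0, cthbar=0.0, cboth=0.0):
        self.c = np.array([c1, cth, cthbar, cboth], dtype=float)

    def __mul__(self, other):
        a, b = self.c, other.c
        return Grassmann2(
            a[0] * b[0],                                            # scalar part
            a[0] * b[1] + a[1] * b[0],                              # th part
            a[0] * b[2] + a[2] * b[0],                              # thbar part
            a[0] * b[3] + a[3] * b[0] + a[1] * b[2] - a[2] * b[1],  # th*thbar part
        )

    def __repr__(self):
        return f"{self.c[0]} + {self.c[1]}*th + {self.c[2]}*thbar + {self.c[3]}*th*thbar"

th, thbar = Grassmann2(cth=1.0), Grassmann2(cthbar=1.0)
print(th * th)      # zero in every component: nilpotency, th^2 = 0
print(th * thbar)   # +1 on the th*thbar component
print(thbar * th)   # -1 on the th*thbar component: anticommutation
```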
#5
RE: Supermathematics and Artificial General Intelligence
Get freaking banned already Mr "Non-Beliefism."

NONSENSE! ARE YOU THEISTIC?

Get freaking banned already.
#6
RE: Supermathematics and Artificial General Intelligence
(September 5, 2017 at 2:28 am)Mathilda Wrote: Machine learning is very popular right now because there is a lot of money to be made from statistical analysis of big data sets. This is what is driving the research; it is not the route to strong generalised artificial intelligence. But a new EU regulation (the GDPR) comes into force in May 2018 that requires that all automated decisions can be questioned and explained. It has always been hard enough to convince clients to use a simple three-layer back-propagation network that performs one statistical function; explaining how a deep learning model functions by extracting hundreds of salient variables will be near impossible.

Like all AI techniques, machine learning has its limitations.

Mathilda, on the contrary, research is moving largely in directions concerning very general algorithms, or general intelligence.

Don't forget about unsupervised learning models that already exist today (and are only improving):

(1) Manifold learning, or DeepMind's "Early Visual Concept Learning with Unsupervised Deep Learning"

(2) Generative Adversarial Networks, which use unsupervised learning (see Wikipedia: "Generative_adversarial_networks")

etc. (A minimal GAN training-loop sketch follows.)
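
Since GANs came up, here is a minimal GAN training-loop sketch in PyTorch on a toy 1-D Gaussian, to show the unsupervised setup: no labels, only samples. All architectures and hyperparameters are illustrative assumptions.

```python
# Minimal GAN on unlabelled 1-D samples from N(2, 0.5^2); sizes illustrative.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    # Discriminator step: separate real samples from generated ones.
    real = torch.randn(64, 1) * 0.5 + 2.0     # unlabelled "data"
    fake = G(torch.randn(64, 8)).detach()     # block gradients into G here
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: produce samples the updated discriminator calls real.
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(256, 8)).mean())  # should drift toward 2.0
```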


EDIT:
What did you mean by three layers?
Don't forget about residual neural networks, and other stochastic models, that can have thousands of layers. (See the 2016 arXiv paper "Deep Networks with Stochastic Depth".)
I myself have configured optimally converging 20-layer residual neural nets on a low-end Nvidia card. (A minimal residual-block sketch follows.)
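
For instance, here is a minimal residual-block sketch in PyTorch. The fully connected body and the sizes are illustrative; the stochastic-depth paper uses convolutional blocks and randomly drops layers during training, which is omitted here.

```python
# Minimal residual stack: the identity skip lets gradients flow past each
# block, which is what makes very deep networks trainable. Sizes illustrative.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return x + self.body(x)   # skip connection

net = nn.Sequential(*[ResidualBlock(64) for _ in range(20)])  # a 20-block stack
print(net(torch.randn(1, 64)).shape)   # torch.Size([1, 64])
```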
#7
RE: Supermathematics and Artificial General Intelligence
I sent you an extremely important and extremely serious PM, Mr. ThoughtCurvature; please do open it.
#8
RE: Supermathematics and Artificial General Intelligence
(September 5, 2017 at 5:37 am)ThoughtCurvature Wrote: Mathilda, on the contrary, research is moving largely in directions concerning very general algorithms, or general intelligence.

Don't forget about unsupervised learning models that already exist today (and are only improving):

(1) Manifold learning, or DeepMind's "Early Visual Concept Learning with Unsupervised Deep Learning"

(2) Generative Adversarial Networks, which use unsupervised learning (see Wikipedia: "Generative_adversarial_networks")


I've been working on unsupervised learning models for over 20 years, but thanks for telling me that they exist.

While I agree that unsupervised learning is a step in the right direction towards artificial general intelligence, or strong AI, there is a big difference between an AI that can learn a static data set unsupervised and one that can act, learn and relearn in real time. This is important because the real world is not a static data set.

I am also well aware of the need for generalisation. As far as I am concerned, this is the key reason why so much AI fails. People get excited by initial results but don't realise that the challenge actually lies in scaling up, because in doing so the problem domain grows exponentially.

You also did not respond to my point about the EU's GDPR, which demands that automated decision-making provide explanations; this will dramatically reduce the profitability of the current deep learning approach. It is precisely because neural networks lack explanatory power that I have stopped using them myself.
#9
RE: Supermathematics and Artificial General Intelligence
He's STILL not banned? Holy crap. If he stuck out any more, he'd erupt.
#10
RE: Supermathematics and Artificial General Intelligence
Ever hear of the radical concept of waiting? Jeez, I'm supposed to be on holiday, myself.
At the age of five, Skagra decided emphatically that God did not exist.  This revelation tends to make most people in the universe who have it react in one of two ways - with relief or with despair.  Only Skagra responded to it by thinking, 'Wait a second.  That means there's a situation vacant.'