
Recommended Posts

Posted (edited)


Two years ago I was one of the lucky people who wrote a basic artificial neural net in Python from scratch and got it to run, but I didn't really understand why some of the code was working.

I'd been following a channel called 3blue1brown for math content, and I recently saw a YouTube video about neural nets from the same channel.

3blue1brown's neural net series is surprisingly clear; I've never seen an explanation that clear in all my life.

[image from the Amazon listing for the tutorial]

What's even cooler is that I found a tutorial about neural nets on 3blue1brown's subreddit, inspired by the channel, that makes the neural net series even clearer.

The tutorial is on Amazon, but it's also free on Quora. (The image above is from the Amazon listing.)

Whether you've never gotten a neural net to work or you're already an expert, check out 3blue1brown's neural net series and Jordan's free Quora tutorial.

 

Posted (edited)

For those who may be stunned by the thousands of nuts and bolts to consider while writing an elementary neural network from scratch, consider these words from the Quora tutorial:


Although all the nuts and bolts of an elementary artificial neural net are easy to understand in the long run, the neural net unfortunately consists of thousands of moving parts, so it is perhaps tedious to grasp the whole picture.

As such, the entirety of an elementary yet powerful artificial neural network can be compacted into merely 3 parts:

1) “Part A — Trailing hypersurfaces” & “Training set averages”:

https://i.imgur.com/AeVOawT.png

2) “Part B — Partner neuron sums — An emphasis on “trailing hypersurfaces”:

https://i.imgur.com/lUtg01z.png

3) “Part C — Error correction — Application of costs from the trailing hypersurfaces”:

https://i.imgur.com/UgXMjsm.png

Notably, Part B is merely a way to clarify Part A, so the neural network basically boils down to just two things:

1) A sequence of “hypersurface” computations with respect to some cost function.

2) An application of costs, aka “negative gradients” (using the hypersurface computations), to update the neural network's structure as it is exposed to more and more training examples, so the neural net improves over time.
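To make those two things concrete, here is a minimal sketch in Python/numpy (my own toy example, with made-up data and variable names, not code from the tutorial): a tiny one-hidden-layer net where each training step computes the cost's gradients and then applies the negative gradients to the weights.

# A toy sketch of the two steps above (hypothetical example, not from the tutorial):
# 1) compute the cost and its gradients over the training set,
# 2) apply the negative gradients to update the network's weights.
import numpy as np

rng = np.random.default_rng(0)

# Made-up training set: XOR-style inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 3 neurons, one output neuron.
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(10000):
    # Forward pass and the cost ("hypersurface") over the training set.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    cost = np.mean((out - y) ** 2)

    # Gradients of the cost with respect to each weight and bias.
    d_out = 2 * (out - y) * out * (1 - out) / len(X)
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * h * (1 - h)
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)

    # Apply the negative gradients: step downhill on the cost.
    W1 -= learning_rate * dW1
    b1 -= learning_rate * db1
    W2 -= learning_rate * dW2
    b2 -= learning_rate * db2

print(np.round(out, 2))  # should drift toward [0, 1, 1, 0] as the cost falls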

 

Posted (edited)
On 11/16/2017 at 0:38 PM, fiveworlds said:

I'd rather use

https://software.intel.com/en-us/ai-academy/students

They have tutorials on a lot of the major frameworks.

Those tutorials at ai-academy appear to abstract or hide away all the crucial stuff.

This is not the path you want to take if you want to contribute to the field in a non-trivial way.

You're going to need to understand what is actually going on underneath if you want to make fundamental contributions or do something truly novel.

Posted
Quote

Those tutorials at ai-academy appear to abstract or hide away all the crucial stuff.

You can't possibly have read all that already so don't pretend that you have

Quote

This is not the path you want to take if you want to contribute to the field in a non-trivial way.

Intel, IBM and Google take massive strides in the field of AI. That tutorial teaches you what they use.

Quote

You're going to need to understand what is actually going on underneath if you want to make fundamental contributions or do something truly novel.

Your tutorial has ZERO examples of actual code; it is just useless maths.

 

Posted
On 11/21/2017 at 9:01 AM, fiveworlds said:

You can't possibly have read all that already so don't pretend that you have

Intel, IBM and Google take massive strides in the field of AI. That tutorial teaches you what they use.

Your tutorial has ZERO examples of actual code; it is just useless maths.

 

1) I don't need to read all of the ai-academy ML content to see that it could be clearer. Reading the introductory chapters tells you a lot about the remaining content.

2) I am not saying you shouldn't use ai-academy; all I am saying is that it can be supplemented by Artificial Neural Nets for Kids.

3) The tutorial's Quora page does talk about a neural net the author wrote from scratch two years ago, but the tutorial itself is not code-specific. It does, however, lean heavily on matrix-based explanations. I think some code should be included in the tutorial, though.

Anyway, how would you describe backpropagation in your own words, in terms of the math too, without thinking about a specific programming language?
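For what it's worth, here is one compact way to write it down (standard notation for a fully connected feedforward net; the symbols are my own choice, not the tutorial's). With z^{(l)} = W^{(l)} a^{(l-1)} + b^{(l)} and a^{(l)} = \sigma(z^{(l)}) at each layer l, the error at the output layer L for cost C is

\delta^{(L)} = \nabla_{a} C \odot \sigma'(z^{(L)})

it propagates backwards through the layers as

\delta^{(l)} = \big( (W^{(l+1)})^{\top} \delta^{(l+1)} \big) \odot \sigma'(z^{(l)})

and the gradients used for the update are

\partial C / \partial W^{(l)} = \delta^{(l)} (a^{(l-1)})^{\top}, \qquad \partial C / \partial b^{(l)} = \delta^{(l)},

W^{(l)} \leftarrow W^{(l)} - \eta \, \partial C / \partial W^{(l)}, \qquad b^{(l)} \leftarrow b^{(l)} - \eta \, \partial C / \partial b^{(l)}.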
