Everything posted by sciwiz12
-
I don't know how this math thing became the center of the discussion while everything concerning AI is overlooked. But to address the point briefly: any positive integer, say 5 for a simple example, is composed of a number of ones equal to the value of the integer. 5 = 1+1+1+1+1. Such is the case for all positive integers, which is what I mean when I say they can all be broken down into ones. Any positive integer can be expressed as a sum of ones. I almost said "geometric series", but that's not the right word, since a geometric series has a common ratio; this is just repeated addition. All added together? You know what I mean.
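As a quick sanity check in code, here's the idea of breaking a positive integer down into a collection of ones and adding them back up. A minimal sketch; the function name is just mine:

```python
def as_ones(n):
    """Break a positive integer into the list of ones that composes it."""
    if n < 1 or n != int(n):
        raise ValueError("only positive integers break down into ones")
    return [1] * int(n)

# 5 = 1 + 1 + 1 + 1 + 1
ones = as_ones(5)
```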
-
More thoughts occurred to me as I review some math that has become quite rusty for me. I'm thinking about it in terms of an artificial intelligence's ability to conceptualise. I think an AI should have some separate components for performing mathematical operations, plus an interface between the language and math components. Now I'm building a more complex notion of what it is for a computer to have an idea. For instance, the concept of an apple should contain within it a reference to a geometry method with apple-specific parameters, under the name appleGeom or something. It could store data concerning an apple's color based on light values, etc., so that it could use that information to think about how apples would behave under various circumstances. This gives us a sense of imagination: it can create ideas by taking what it knows and testing it under different scenarios to simulate what would take place. It doesn't have to create a visual representation, or show a graph beyond testing and debugging, but it would have to store the data based on the type of simulation. I know I'm being vague, because this concept alone takes a lot of work starting from even basic mathematical concepts, and it requires a solid fundamental understanding of physics and geometry, as well as algebra and calculus, even for just a basic version. But given the ability to run mathematics and physics simulations, the next step would be to relate these abilities to language and to its understanding of itself, the world, and other people, and to manage the interactions such that at no point does the machine attempt to process more data than it can handle in a short timespan. Again, I think the trick is to start with a few basic words that are predefined and tied to scripts containing information pulled from the more abstract math and science scripts.
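A minimal sketch of what I mean, in Python. Everything here (the Concept class, the drop simulation, the stored data) is a hypothetical placeholder in the spirit of appleGeom, not a worked-out design:

```python
class Concept:
    """A concept couples stored data with methods for 'imagining' the thing."""

    def __init__(self, name, data):
        self.name = name
        self.data = data  # e.g. color, mass: whatever the simulations need

    def simulate_drop(self, height_m):
        """A trivial thought experiment: how long would this object take to fall?"""
        g = 9.81  # m/s^2, assuming Earth gravity
        return (2 * height_m / g) ** 0.5  # time in seconds, ignoring air resistance

# An 'apple' concept with apple-specific data attached.
apple = Concept("apple", {"color": "red", "mass_kg": 0.2})
fall_time = apple.simulate_drop(1.0)
```

The point is only the shape of the thing: data plus simulation methods hanging off one named concept, so the machine can "imagine" the object under different scenarios.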
Upon being given a new word and its relationship to previous words, it must attempt to figure out where this thing belongs in the world, perhaps through a 20-questions-style game with the user, or perhaps through context. Perhaps it could use a method for determining when a word it previously didn't understand has accumulated enough information to attempt to understand what it is. It can design a class around it and store the class information in a human-readable file, pending approval and awaiting correction and grading. Having been corrected, it can incorporate the class into its knowledge base, and over time use some statistical model to seek patterns and differences between incorrect guesses and correct guesses, to determine if some part of its understanding of the world is wrong and suggest corrections and updates. Now I'm really getting vague. Yet for me this is starting to come together: before it can create a concept of itself, the world, etc., it needs classes containing methods and data concerning the various areas of math and science. It also needs some of the previously discussed methods in order to parse at least English, and possibly also a class to facilitate the concept of language itself. From these it can build changing models of itself, the world, and the users. As it parses more English it needs to be able to think about new words first in terms of English, to help it understand the category of things to which a word belongs, so that it can pinpoint the category of scientific and mathematical knowledge it can use to create a model of the object, describe the place of the object in the world, and understand the actions and operations that can be performed on it. It'll also need a way to update object and action concepts when it has added new information, and a way to output information based on its understanding of the world. It needs a way to write an essay addressed to the user to demonstrate the ways in which its understanding has evolved.
Obviously I still have a lot to consider, more than before, but I think I'm sniffing down the right path.
-
Thanks, you guys, and a particular thanks to studio for his book recommendations. I'm feeling more confident already.
-
I mean, we're eventually going to see that crossover; eventually we will be able to understand how the physical process results in the experience. I suspect we'll basically resolve that consciousness, sensation, and perception are all composite experiences. That there really is no unitary consciousness, but instead so much simultaneous information processing that it all gets muddled together until it doesn't seem like the simple operations of a machine. We compile the data into a recreation of what the world outside is like, a simulation that is used to coordinate actions, and simultaneously a lot of other processes are taking place to decide how to feel, what to think, etc. And there's so much extra processing going on that we convince ourselves that there's a ghost of the self looking through eyes, when really it's just simultaneous processes. That's all speculation, but if I had to guess how it would all shake out...
-
I've heard of fuzzy logic and have seen an example discussed in terms of maglev train rails in Japan. I haven't delved into the math yet because I didn't pay a lot of attention in math class and I have to review a lot of subjects. I'm assuming it borrows from modular arithmetic, probability, and possibly combinatorics to generate pseudo-random values for certain calculations in the decision-making process? Something along those lines? I vaguely remember some discrete math, but maybe I should just shut up and read the book. Thanks for the recommendation.
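From skimming ahead a little, it looks like fuzzy logic is less about pseudo-randomness and more about graded set membership: a temperature can be "warm" to degree 0.7 rather than simply warm or not. A tiny sketch of a triangular membership function, which I gather is one common choice, if I've understood it right:

```python
def triangular(x, a, b, c):
    """Degree of membership in a fuzzy set shaped like a triangle
    rising from a, peaking at b, and falling back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# 25 degrees is 'warm' to a degree between 0 and 1, not a hard yes/no.
warmth = triangular(25, 15, 30, 40)
```

A fuzzy controller then combines degrees like this across several rules and "defuzzifies" them into one crisp output; no randomness required.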
-
I wanted to think about free will a little bit: firstly, discussing decision making in terms of predetermination and true randomness; and secondly, discussing consciousness simulation and will. For any given decision based on a set of choices, the selection is either determined by the input or it is random, such that the same inputs will give unpredictable outputs. If a decision is predetermined then I can only freely choose the outcome if I can freely choose the input. Even recursive decision making would still utilise inputs based on initial conditions. Likewise, if outputs are random, or even based on truly random initial conditions, then I am still a slave to chance. Thus if I make a decision, whether that decision is made based directly on input or even partially on chance, the choice cannot truly be free for me to make. That said, a possible free will can be derived in a sense. Imagine the concept of the self as a class in object-oriented programming: a collection of information coupled with a set of instructions which the brain can utilise to simulate the self and to make decisions according to the concept of the self. All may still be calculated deterministically, but there is a sense of free will if we imagine the brain attempting to simulate free will according to a conception of the self, using pseudo-randomness in certain calculations to make approximately random decisions from a range of suitable options, partially based on stored information pertaining to will and preference. Another question occurs to me, however. Even in the above model, does this concept of a man have free will? Rather, is he the author of his will? If only factors outside of himself determine his will then he is not the author of his will, and if he wills that his will should change then it merely shifts the question. His initial will must be predetermined by external factors, and every subsequent alteration to will is made from the initial will.
That said, one could conceivably construct a more intelligently designed will by taking information learned through experience, plus desires from the initial will, to design and construct a new will, then undergo a transition from the old will to the new. Yet even so, the new will is determined from the old will, itself determined from initial conditions, possibly with elements of true randomness, but certainly not authored by one's own self. Now, I'm not so bold as to claim that this is objectively how we as humans work; I'm simply attempting to construct a model which, if similar to humans, could partially explain the experience of free will: an ability to construct, from preconditions and with knowledge, a new will from an older will, resulting in the perception that one's choices are one's own. Can a man then be held responsible for his actions? In the model I've concocted above, in one sense he cannot, as he could control neither preconditions nor the results of chance. In another sense, however, he could obtain knowledge along with the pre-existing will and create a new will such that it conforms more closely with what knowledge tells him he should want. The more this process occurred, the further removed the man would be from his initial conditions and the more like a free man he could become; by constantly updating his will he is never truly free from external control, but he evolves into himself, blurring the line at least between fate and freedom. Of course, that may not in any way reflect how we operate, and my reasoning may be in some way faulty, but it at least provides a model for consideration, assuming it isn't stupid.
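To make the metaphor concrete, here is a toy sketch of that self-as-class idea: a "will" as a preference table, pseudo-randomness for the approximately random choices, and a revise step that builds the new will from the old one. Purely illustrative, with all names made up; none of this claims to be how brains work:

```python
import random


class Agent:
    """Toy model: a 'will' is a preference table, updated from experience."""

    def __init__(self, preferences, seed=0):
        self.preferences = dict(preferences)  # the initial will: set externally
        self.rng = random.Random(seed)        # pseudo-randomness, not freedom

    def choose(self, options):
        """Pick among suitable options, weighted by the current preferences."""
        weights = [self.preferences.get(o, 1.0) for o in options]
        return self.rng.choices(options, weights=weights)[0]

    def revise(self, option, lesson):
        """Construct the 'new will' from the old will plus what was learned."""
        self.preferences[option] = self.preferences.get(option, 1.0) + lesson


agent = Agent({"tea": 2.0, "coffee": 1.0})
pick = agent.choose(["tea", "coffee"])
agent.revise(pick, lesson=0.5)  # each revision drifts from the initial conditions
```

Notice that every state is still fully determined by the initial preferences and the seed, which is exactly the point of the argument above.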
-
Does mathematics really exist in nature or not?
sciwiz12 replied to seriously disabled's topic in General Philosophy
OK, I think I see part of the problem: you want to talk about interneuronal activity in the same vein as language. So I'll grant you that neurons do transmit information, so in a sense this is a form of communication, but it's here that I will draw the line between almost bit-like data transfer and language in a traditional sense. For instance, the receiving nerve cells don't understand the information they receive; they just pass it along according to a chemically mechanistic process. It'd be like forming a line of people to pass a bucket of water or sand, or stuff in general: you never look at the bucket, you don't think about it, you just pass it to the next guy in line. Any word I think of in any language, however, has multiple data points and possible reference sources. If I say "apple" you have to access all of the information concerning reference apples you've experienced: the color red, the sweet taste of juicy apples, the notion that an apple can rot, etc. You have to turn that data into a reference you understand as being represented by the word used. That's not all, of course, because to really make it a language I need a succession of multiple words, each with various references, to provide context. Other vocalisers and communicators may have more rudimentary data exchange, but if it does not convey specific enough information then it cannot be said to be a language, and if it can convey specific information to another that understands that communication then it is not akin to the way the brain processes information. Also, many species reproduce asexually, arguably a lot more than those that do so sexually, and most animals don't mate for life; that's pretty rare. In general you're presenting us with fallacious reasoning. Even "perfect" hives do nothing to prove your claims about language; it's not even evidence, really, or an argument. It's a question, and the answer for most of those questions has nothing to do with language.
Let me put it to you this way: a computer doesn't really use languages; the languages exist on a layer of abstraction. A computer uses a complex series of electrical components whose states can be represented as information, which produces layers of abstraction on which language can be implemented. Human brains and animal communicators are similar in this way. The nervous system runs a layer of abstraction that can act as an interface for language; language isn't inherent in the inner workings of the brain, but the processes of the brain can use data in an organised fashion to interface with others in the world, so as to communicate in a language on a level of abstraction which is decoded from language back into information which the brain processes not as a language but electromechanically and chemically.
-
Does mathematics really exist in nature or not?
sciwiz12 replied to seriously disabled's topic in General Philosophy
Yeah, I think we can all agree that was pretty bold of you, to make those kinds of broad, generalising statements about how the brain operates and interplays with language, as well as about the nature of consciousness. I mean, it is fair to say such claims require empirical data; I'm wondering what your thought process was on that one. Did you intuit that information and assume that your intuitively derived claims would invalidate previous contentions? Did you actually read that in a peer-reviewed study, or observe this in a peer-reviewed study on the brain and language? If so, it is incredible news, as I was not aware such in-depth and insightful studies could be performed even with today's advanced brain imaging equipment. I was pretty sure we could only approximate which general areas of the brain were responsible for specific processes, and I'm certainly surprised to learn we've discovered and can track the variations in location of consciousness itself within the brain from person to person. The whole thing? Really? I'm being sarcastic, but you get the idea. I can sympathise with you; it's natural to say what feels like it makes sense. You think it should seem obvious based on what you've read and what you seem to know about the subject. I do that sometimes anyway; all I'm trying to say is I understand where you're coming from. You don't have to feel like defending yourself here; it's OK to disagree to some extent about the nature of the brain. I personally disagree with your stated position based on what I've read in terms of neurology, specific case studies of feral children, and the role of Wernicke's and Broca's areas of the brain, as well as case studies on individuals who have suffered damage to those areas of the brain and the effect it has on a person's ability to process language.
I don't believe that language is necessary for all humans. It seems like you've partially expanded the definition of language in part of your argument, but I wouldn't consider chemical neurotransmitters or hormones to be a language in the same sense that English is a language. The brain operates on electrical discharge and signal transmission through chemical messengers, but that kind of organised transmission of data in the forms of energy and chemical reaction isn't quite the same as having a mind built on a framework of language in a format anything like spoken and written languages. Here's my counter-proposition: it seems more reasonable to me that as organisms developed more advanced nervous systems over time they were able to incorporate a wider range of senses, and some organisms developed a capacity to manipulate sound for defensive purposes, which over time found a new use in the ability to warn and coordinate group activities. Sound is particularly well suited to such activities because of the way air carries sound around visual barriers and over moderate distances, thus allowing creatures to warn and communicate without being easily seen by predators and without requiring a clear line of sight for reception. As our ancestors began to develop tools through more complex processes, they began to attempt to communicate the processes and skills they wished to teach through rudimentary pictorial and vocal language, which grew more complex with time. Now, this is still merely conjecture, but it makes more sense from what I've observed of reality so far, and would, if true, imply that language is not necessarily innate but is dependent upon the same structures that allowed vocalising organisms to perceive warning in the vocalisations of others.
Now, you're welcome to continue to believe what you think, and I invite you to consider the above possible explanation, but let's neither of us engage in the belief that we know how language works in the brain with unassailable and infallible accuracy; we both know better than to engage in such fantasy.
-
I love science, and while I gather that I don't have to be better at math per se, as tough as it is for me, I would really like to be a dope mathematician. I'm pretty good at finding free educational materials, including books online. If I'm understanding correctly, if I want to brush up on my skills and really get a deeper understanding of mathematics I should revisit the basics first with arithmetic, algebra, and elementary geometry. After the basics I should go for some trigonometry and both integral and differential calculus. For my work with computers I should also revisit discrete mathematics. From there I won't be anywhere close to a master, but I'll be fit for college-level work and I can branch out, tackling some of the big theories like number theory and set theory, some analytic and non-Euclidean geometry, some statistics, etc. I just want to get a lot better at math. It's physically and emotionally painful for me to look at it for more than a few minutes, but I'm trying to get to the level where I'm really pretty good and knowledgeable in the subject. I know it's kind of silly, but it sucks to open up a video on MIT OpenCourseWare and the teacher just expects all of the students to already know some mathematics concepts that just leave me wide-eyed and bewildered. I'm trying to develop a growth mindset, but when the gravity of my mathematical ignorance strikes me it leaves me feeling stupid, and I know I'll probably never accomplish my dreams, but if I could at least be a bit more ballin' in mathematics, that'd be hella dope.
-
I don't know what you're talking about. I mean, mathematically, any positive integer can be broken down into ones; at least theoretically, if you gave me a positive integer and I could live forever, I could keep subtracting one from the value of that number, regardless of size, until it had been reduced to a large collection of ones. I don't know what you're talking about.
-
Does mathematics really exist in nature or not?
sciwiz12 replied to seriously disabled's topic in General Philosophy
There are children who grow up with fully functional brains, mouths, and ears who never learn any language and grow up like animals. Feral children, I think they're called. So language isn't necessary, because you can be a human person and grow up neither knowing nor thinking in language. It's just natural to assume that language is part of the package because we have had it for longer than we can usually remember.
-
Excuse me, any whole positive integer.
-
Well, I said ideology because while that's typically not the way the word is used, you could theoretically define it as a study of ideas, but thank you for your feedback. Oh, harsh, "neuroscience for kids"? Was it really that bad? I... I see how you feel... No no, it's OK, I'll just go huddle in this dark corner and cry in shame and disgrace. That said, thanks for the links.
-
It seems to me that the key to an artificial intelligence is not that it mirrors the human brain, but that it has a good way to think about the world and try to understand it better over time. If a machine understands how the world works it is intelligent; if it can apply that understanding it is also powerful. The goal is to find a good approach for constructing a way of thinking that allows a computer to build and test its understanding of the world, but we have to think about what it is to understand. For instance, I may know that 1+1=2, which I can use to answer the question "what is 1+1?", but there are different levels of understanding. Understanding has both depth and variety. By depth I mean layers of reference: 1 is a number, it is a quantity, an amount, an abstract idea, a concept, a notion. By variety I mean different ways of thinking about the application of an idea: I can use 1 to solve the above problem, I can also use it to refer to the amount of a real-world object I can identify, I can break any number apart into 1s, etc. An intelligent machine needs a way to develop new modes of thinking abstractly, but I'm getting ahead of myself. Let's start with something simple. Say I'm given a large file of plain text. Assuming the text all uses legal English grammar and syntax, I should be able to decode it using a few simple rules. I start with an array of capital letters and an array of ending punctuation characters, as well as a quotation character and a variable for the space character. I may also need some other character variables later, to search for commas, etc., but this will do for starters. First I need to be able to build an array or string of characters from the text representing a full sentence. I need to iterate through the text and, for each character, check to see if it's in the ending punctuation array, and if it is, store it in a new array with a number to represent its position in the text.
I also need to find the quotation marks so I can think about everything between pairs of quotation marks separately. Then I need to find every capital letter in the text and determine whether or not the character just before it, not counting the space, was an ending punctuation mark. If so, I know that whatever follows, up to the next ending punctuation (excluding ellipses and quotations), is part of the same sentence, so I can store it all together in an instance of a sentence array. For each sentence I need every collection of characters that comes between spaces, excluding characters that are commas, colons, and semicolons. Now I can go through the characters of each word looking for capitalisation, apostrophes, -ly, -ing, and other clues to help me make educated guesses concerning parts of speech. Then I can apply some basic rules of grammar and syntax to try to get a better sense of a sentence's meaning. I can identify nouns and make some educated guesses to try to figure out subject, object, implied subject, etc. I can also output some data concerning my level of certainty for each word if I'm in test mode. Once I have a good sense of a given sentence I can perform a few other tests, for quotation marks, and of course consider the use of colons and semicolons. I know I'm oversimplifying here, but so far all I'm doing is using a few rules of grammar applied to a text written especially for my computer mind to read, so that I can make some good easy guesses to identify the use of various words and the likelihood of certainty for each word in each sentence. When I'm done I have a new list of words and their parts of speech, and I can use each sentence to possibly begin to understand the relationships that certain things have with each other. For instance, take the sentence "A shark swims in the ocean".
I may not have a good grasp of what any of those things are yet, but assuming I can accurately identify parts of speech, I know that a shark is an object, swims is an action, and ocean is another object upon which a swims operation can be performed by a shark. Now let's tie this in with some ideas from earlier. Let's say I already have a very basic and shallow conception of the world built in, and included with that is the word "ocean" plus some data and methods I can use to help me understand, conceptualise, and think about oceans. The word can now act as a sort of receptor site for me to evolve my understanding of the world. I already knew a bit about oceans; now I know that whatever a shark is, it can perform a swims operation in the ocean. This may not really make sense to me at first, but it is a building block I can use. I may need some other text-decoding methods so that I can figure out tenses or proper nouns, or filter out conjunctions and articles or something, but later on let's say I see that a kid also performs a swims action on whatever a pool is. I can propose two new ideas: a pool is like an ocean, and a kid is like a shark. A human grading my ideas can score me highly for suggesting that a pool is like the ocean and poorly for thinking that a kid is like a shark, increasing my association counter between pools and oceans and decreasing my association counter between kids and sharks. In this way I can use text to learn and form rudimentary ideas. The real test, though, is when I transition into a phase of reflection where I truly test my understanding by creating a file in English, using what I've learned to try to write about the world as I understand it, based on my built-in constructs modified and expanded by an evolving understanding of new words. So in my mind the main challenge to tackle next is how to set up the base constructs before the program even runs.
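The parsing steps above can be sketched very naively like so. The "verb ends in s" rule is a deliberately crude stand-in for real part-of-speech guessing, just to show the shape of the idea:

```python
import re


def split_sentences(text):
    """Crude sentence splitter: break after ., !, or ? followed by a capital."""
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+(?=[A-Z])', text) if s.strip()]


# Articles and prepositions to filter out, as described above.
STOPWORDS = {"a", "an", "the", "in", "on", "at"}


def guess_svo(sentence):
    """Very crude subject-verb-object guess: the first content word ending in
    's' (after the first word) is taken as the verb, its neighbours as subject
    and object. A placeholder for real parsing, nothing more."""
    words = [w.lower().strip(".!?,") for w in sentence.split()]
    content = [w for w in words if w not in STOPWORDS]
    for i, w in enumerate(content[1:], start=1):
        if w.endswith("s"):
            subj = content[i - 1]
            obj = content[i + 1] if i + 1 < len(content) else None
            return subj, w, obj
    return None


svo = guess_svo("A shark swims in the ocean.")  # ('shark', 'swims', 'ocean')
```

Even a toy like this yields the (shark, swims, ocean) triple that the association-counter idea above could then grade and refine.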
-
I wrote another post in the psychology forum attempting to further explore the concept of ideas, and while I'm still waiting for feedback on both this post and the post on ideology, I do have more thoughts to add here. I think that in order to teach a machine to understand words it might be useful to take works of English literature in plain text and have the machine search the text for repeated words, creating a count of each word in the text and the number of words between the most commonly used words. For an idea of what to do next, it may be useful to divide the machine mind into primary regions, with an interface acting as a gateway between incoming words and the parts of the mind used to store those words. Initially all words are stored on the language level until they can be sorted, along with information about potential relationships between words based on frequency within a text, possibly also by attempting to intelligently identify parts of speech and relationships between the same word modified by usage and tense. Then it would go through a comparative phase where it would open up the parts of its brain to see if the words taken from the text can be found within the database being accessed. If the word is contained within, the computer will check to see if it thinks one of the new words might be related to the existing words. If yes, then the new word will be stored with information about its relationship to the existing word. If two or more related new words already existed in the database, it would update the relationship counter to show a stronger relationship. This process will repeat for each primary database, or possibly a primary folder containing databases, in which case each database will be examined. Then the machine will go into evaluation mode, where it stores all words not yet sorted into a database of unsorted words for later consideration, after it is able to access more literature.
In the evaluation phase it will also go through the databases making comparative queries to see which words have been stored in multiple databases, at which point the word is given a special status as a bridge word that may find use in multiple areas. The bridge words that have connections to many databases are tested or evaluated to provide a rough prediction concerning the likelihood that each is an overlooked article or conjunction. The literature files in plain text may also have to be stored in accessible folders for reference, which means that words may also need to contain some data concerning their sources for future context testing. Words that span many databases in various areas (folders) but seemingly aren't conjunctions, articles, etc. are put into a general-use folder, perhaps with a special function to occasionally notify the programmers of the words in the general databases, and possibly query the developers for more contextually related words to help it zero in and identify the word in greater detail. The folders should probably all be general topics with broad headings, containing subfolders with slightly less broad headings, with maybe one more layer of folders and then the databases; for instance, science containing chemistry containing organic chemistry, or something. From here we can begin building a framework for the computer to begin using the words, by hard-coding information concerning the relationship of words contained within certain databases to itself and the world, as well as methods to utilise new words and their relationships to begin constructing ideas about itself, about the world, and about others. Ideally we could then examine these constructed ideas to measure the machine's evolution and success over time in the correct use of information to orient itself to the world.
We could also begin to employ clever algorithms here to have the machine test and evaluate its own ideas and create better methods, based on patterns which lead to more frequent successes with higher scores. Of course, it's time to stop for now, because I'm beginning to get far too vague and need to reflect on where I am so far and where to go from here.
-
I would like to take a moment to talk about approaching an understanding of psychology with a focus on ideas and beliefs. Now, being admittedly not a professional on the subject, I'm not sure whether the proposed topic is a no-brainer, or explored and abandoned as an impractical approach due to mechanistic obscurity, but I do study software and lately I have been focusing on artificial intelligence, so for me the notion of an idea is of practical and more immediate concern. So I'm going to begin this thread by freely musing, to construct a model and a framework for beginning to think about ideas as a distinct topic of study, and allow more highly educated opinions to weigh in on the merit and uniqueness of whatever I come up with. There are a couple of immediate concerns. What is the makeup of an idea? What is the relationship between an idea and the physical, neurological functions of the brain? What role do ideas play in the psychology of a human? Can we still maintain a concrete understanding of an idea as we abstract it away from the human brain? Can we define the concept of an idea in such a way that is still accurate to the sense of what an idea is, without being so vague that it loses necessary practical utility? There are more concerns, to be sure, but I would like to say that these questions are a good jumping-off point for the time being. I'm going to proceed a little more experientially and intuitively for now and attempt to course-correct into objectivity afterward, but of course without the ability to get useful data and feedback concerning neuronal activity, a lot will have to be based on my limited prior knowledge and a bit of pseudo-logical guesswork.
When I acknowledge something as an idea, it usually starts as a vague feeling or sense, an intuition of sorts, which I then use to query my knowledge of words in a sort of lock-and-key or best-fit fashion in order to approximate the idea verbally, even if I don't intend to express the idea vocally to another conscious observer. So an idea seems to begin as a sense, and I seek words out of a desire to resolve the sense with a concrete and communicable description. Of course I wasn't born with a complete language, but it seems reasonable to suggest that the human race created language to make ideas communicable to others, if not also easier for one's self to process. So a new question could be: why do I feel that ideas become easier to process through language, if ideas are somewhat original to my nature while language is a foreign construct? If I think of words as references, do I think of them as referring to real-world objects, or as references to ideas which hold the information which a word might be well suited to describe? If the idea refers more directly to information than the words which describe the information, then why does my brain seemingly take comfort in the use of words to encapsulate ideas for the sake of consideration? If I think of my brain as a web of interconnected nodes linked through pathways that change according to association, then it could be that each node temporarily, or at times permanently, holds part of the data required to form a more complete piece of information, which itself acts as a component of an idea. Then I could reason that because the word is so closely related to the information, it acts as a doorway to further information stored with the word that might also relate to information being compiled in order to construct an idea. So then I can think of an idea as a compilation of related and associated data and information in a constant state of flux.
The brain may perform a sort of searching algorithm, the process of which may also serve to strengthen connections as it utilizes them in the quest for relevant data. The signals travel through connected pathways and, perhaps through a sort of guess-and-check operation, are gradually directed toward more relevant pathways. Underdeveloped pathways would yield less relevant data more often, and some process involved in guessing and checking would gradually show preference for more relevant pathways. In this way the brain accesses all of the data needed to simulate an object, describe an object through various related pieces of information, make connections to seemingly related objects, and eventually create levels of abstraction by cross-referencing related objects until a new object is formed in the mind from intersecting information, but with no specific objective point of reference. That's what I'm starting with, feel free to school me now.
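If I were to sketch that guess-and-check search in code (purely as a toy, with made-up names like `AssociationWeb`, nothing neuroscientific), it might look like a weighted graph where searching strengthens the pathways it uses:

```python
import random

class AssociationWeb:
    """Toy model of nodes linked by weighted pathways.

    Searching the web strengthens the pathways it travels, so
    relevant routes are gradually preferred over underdeveloped ones
    (a crude 'use it or lose it' rule)."""

    def __init__(self):
        self.edges = {}  # (a, b) -> pathway strength

    def link(self, a, b, strength=1.0):
        self.edges[(a, b)] = strength

    def neighbors(self, node):
        return [(b, w) for (a, b), w in self.edges.items() if a == node]

    def search(self, start, target, max_steps=100):
        """Random walk biased by pathway strength; reinforce the
        route that reaches the target, weaken a fruitless search."""
        node, path = start, []
        for _ in range(max_steps):
            options = self.neighbors(node)
            if not options:
                break
            nodes, weights = zip(*options)
            nxt = random.choices(nodes, weights=weights)[0]
            path.append((node, nxt))
            node = nxt
            if node == target:
                for edge in path:  # strengthen the successful route
                    self.edges[edge] += 0.5
                return path
        for edge in path:  # mild decay when nothing relevant was found
            self.edges[edge] = max(0.1, self.edges[edge] - 0.1)
        return None
```

The point of the sketch is just the feedback loop: a successful search reinforces the edges it crossed, so future searches are biased toward pathways that have paid off before.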
-
Does mathematics really exist in nature or not?
sciwiz12 replied to seriously disabled's topic in General Philosophy
This one went to some really strange places rather quickly; I think the only definitive answer here is... depends on how you look at it. If a thing has a property that it is quantifiable, is that proof that numbers exist? What is the property of quantifiability? Something surely is quantifiable if it can be counted, but what can be counted? Let's take the example of a spoon. With even a rudimentary level of human understanding I can discern a difference between one spoon and two spoons, but the problem here is that there aren't really any spoons. If you take the principles of chemistry and modern physics on good faith, given the evidence and research available on both subjects, then all spoons are really collections of atoms. "Aha!" you might say, "the atoms then are quantifiable, if hard to count." But really there are no atoms, only collections of quarks, discrete packets of energy. Going further, and accepting particular well-regarded models of the subatomic universe, it would seem that particles are in fact excitations of continuous fields that have approximate values at various points in spacetime, or perhaps vibrations of superstrings, which themselves aren't very discrete. So any property of oneness or twoness observed so far is an emergent property of collections or excitations of stuff.

To take it a step further, the likelihood that one spoon is exactly identical in mass to any other spoon, down to even just the atomic level, is absurdly low. So there aren't likely to be two of a spoon anywhere, because each spoon is more or less unique. So if your question is, "is math a property of the objective universe, such that math could exist without the mathematician?" I would argue that it is nothing more than a useful fiction of the observer, a construct utilized in order to process the world in a more organised fashion. Going further, one could even argue that the universe could be an illusion, in which case math is definitely not a property of things that don't exist.
However if you asked, "does math exist?" I would argue that it does, in fact, exist. If our brains exist, then our brains have seemingly simulated consciousness, which has itself constructed all of the basic philosophical components to create a sense of math, which then exists as a function of the brains of the people that perform mathematical operations. Somewhere in the array of discharging neurons, in the processes of sending neurotransmitters across synaptic gaps, math exists and is being performed, and in that way it exists the same way this website exists: as an emergent property of the neurons forming the human brain. If our brains don't exist, then it is even more difficult to pin down the architecture and mechanics of our thinking, because in that case we clearly do not have easy access to accurate information concerning how we exist, but I would still argue that if I exist, then my thoughts exist as properties of me. So to recap: are numbers a component of objective reality? Probably not, but I'll get back to you when I know everything there is to know about the whole of reality with 100% undeniable certainty. Does math exist? As long as you hold it... in your heart... (sparkles and magic rainbows). But seriously, yes, as an incidental product of consciousness, not verifiably necessary to consciousness, but in this particular case an apparent effect of certain consciousnesses. I still think it's worth considering the physical argument, though. Some argue that math is real because things in apparent reality seem to behave in definite and easily predicted patterns, and the best way we've discovered to understand the apparent patterns in the universe is math. I don't think this makes math not a useful fiction, but the merit of the argument comes from the idea that math is more or less discovered rather than simply created.
It's not like the works of Shakespeare, because it came about through observation and, as far as we can tell, isn't subject to change by the will of man so much as by further and deeper observation of the universe. In order to understand this concept it's useful to have a notion of abstraction. Math, and even the physical sciences to a lesser extent, contain ideas that are distinct from imagination because they serve as descriptions of what we have observed through our senses, with math being only slightly more abstracted than chemistry, for instance. So it's not pure fiction, but I also wouldn't say it exists in the physical universe, because while physics demands empirical experimentation for verification (a process of constant reaffirmation of its objective existence), math is unique in that I can discover new "laws" of mathematics while sitting in my room writing in my private journal, without ever testing my ideas in the physical world, and offer up the proof in the same stroke. So assuming the universe is real and science is mostly accurate up to this point, that puts math somewhere between fiction and physical science. The best term I can think of is useful fiction. Math presents us with a model of the universe that is demonstrably insufficient for many tasks, and at times leads us to conjectures which can be described by mathematics yet are unprovable and seemingly unsolvable by mathematics, but the model is so practical we can use it to put a man on the moon and create virtual realities. In other words, it is so close to accurately describing so much of reality in such great detail, and is so deeply intertwined with the objective and empirical, that it might as well be a part of objective reality, even if it technically isn't.
-
To which I would largely agree. In saying the concepts have become intertwined, I suppose it would make more sense to say that the left-leaning political faction in favor of progressivism and civil rights became largely associated with its academic supporters, as colleges tend to breed a style of thinking that favors civil rights but demands knowledge of the sciences as a matter of practice, among other core academic pursuits. I think it's as you say, a matter of enforcing tribal boundaries. Still, there have been times when conservatism didn't go hand in hand with distrust of academics and the sciences, but embraced them alongside everyone else as we embraced the promises of nuclear energy, though perhaps my perspective has been skewed by too much Fallout 3 and 4. Anyway, it seems that as more prominent scientists and professors spoke out against Southern Strategy racism and the Vietnam War, the view of academia and science as strongly left-leaning ideals began to emerge. Then again, as I said before, my idea of what the 50s were like is admittedly a bit skewed.
-
Fortunately I don't waste any money because all of my knowledge of hardware is entirely theoretical. I'm more software oriented which is basically free if you are already paying the electric and cable bills as well as house payments. It's not the spending but the lack of earning that kills me.
-
Anti-intellectualism was predominant in the revival movements' rejection of hard-line "there definitely is no God, that's silly" empiricism. However, there's an argument for a close relationship with both the Southern Strategy and the opposition to counterculture, in terms of increased party centralisation. I should clarify that I can see an argument for the Southern Strategy as related to anti-intellectualism, if you also allow that intellectualism has become intertwined with progressivism and civil rights.
-
Ah, so it is generally more fundamental knowledge and approach that I should be worried about than making sure I know the specific implementations of tools in great detail?
-
Thank you both. Quick query, how do some labs get the higher purity that is more difficult to attain working from home?
-
I'm studying it right now, let's talk about it. I'm still just getting started in my readings really, but I feel at least a little capable of engaging in a dialogue on the subject. Firstly I know enough to know I'm not in the camp that thinks we should be modeling the hardware and software after the human brain. It seems to me that the technology to create highly intelligent evolving computer consciousness is actually closer to being a reality than what would be needed to emulate the human brain. A lot of my computer science knowledge is also heavily tied to game design which is my current major, so when I think of procedural generation I am thinking about procedural content generation techniques using algorithms to create content for games and cross applying the knowledge to the generation of evolving computerised consciousnesses, although I suppose the same techniques exist wholly outside of game design and development. I have two frameworks for thinking about the issue, one is the problem of understanding the meaning behind words and the other is the problem of understanding the self. Essentially what I'm trying to get at is that on some level we as humans have this perception of a ghost within the machine, an observer within us that on the one hand is created by and exists as a construct of the brain and yet seems to our point of view to be somehow removed from the physical. I'm going to start by considering a model of human consciousness within which a man's personality and sense of consciousness are firstly separate and secondly closely related. I then also propose that within this model both are simulations created by the brain to fulfil useful filtering functions. I will also refer to the personality as the identity and the conscious awareness as the mind. In programmatic terms I suspect the identity could best be represented and constructed as a storehouse of information. 
In other words, the identity itself would be represented simply as a collection of information about the characteristics one learns to associate with one's sense of identity. That information is later pulled by various other parts of the brain when making calculations or decisions. For example, let's take Jack. Jack is told that he is handsome, perhaps reasons that he is smart, and decides through experience and sensory/perceptual feedback that he likes chicken. Later, when Jack hears a compliment, he accesses his center of identity containing information about himself, pulls his sense of self-image, recalls that he identifies himself as handsome, compares the compliment to this recovered self-knowledge, and decides that the statement was a genuine expression of admiration. By treating the identity as a storehouse of self-knowledge, we can model or potentially replicate the way that identity and personality function in human beings. Consciousness is another vital consideration. I think that we can model consciousness partially through its relationship with identity. Let's take seeing, for instance. It is difficult to accurately deconstruct one's thoughts and senses, so bear in mind I'm not trying to accurately determine how the human brain actually works, merely how we might think about modeling a computer intelligence with similar features. So when I look out through my eyes, I can think of this process as a combination of processes: on the one hand I am capturing visual data about color, shape, depth, etc... On another level, however, if I choose to focus on it, I may become sharply aware that there is a living thing which is me looking through those eyes and perceiving the world that must therefore be in front of where I am. This perspective can potentially help to deepen our conception of this model of conscious thought.
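A minimal sketch of the Jack example, treating identity as nothing more than a queried data store; the class and method names here are my own invention, just to make the idea concrete:

```python
class Identity:
    """Storehouse of information one learns to associate with oneself.

    Other 'parts of the brain' pull from this store when making
    decisions; the store itself does no reasoning."""

    def __init__(self):
        self.traits = set()       # e.g. "handsome", "smart"
        self.preferences = set()  # e.g. "chicken"

    def learn_trait(self, trait):
        self.traits.add(trait)

    def learn_preference(self, thing):
        self.preferences.add(thing)

def evaluate_compliment(identity, complimented_trait):
    """Compare an incoming compliment against stored self-image:
    a match reads as genuine admiration, a mismatch prompts doubt."""
    if complimented_trait in identity.traits:
        return "genuine admiration"
    return "uncertain - does not match self-image"

jack = Identity()
jack.learn_trait("handsome")
jack.learn_preference("chicken")
verdict = evaluate_compliment(jack, "handsome")  # compared to stored self-knowledge
```

The separation matters: `evaluate_compliment` stands in for "various other parts of the brain," and it only reads from the storehouse rather than living inside it.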
Think of this process of focusing as making a query: it is a willful effort to realize who is doing the looking, where the information from my eyes is going. We can begin to think of the simulated identity as a useful construct the brain employs to orient itself in the environment. The self is a construct to which sensory information can be passed for orientation, and from which information can be collected to make calculations. I perceive a me looking through the eyes, so I can register that the eyes are mine, that the seeing is being performed by the me, and that the things being seen must therefore be in front of the me. So if a dangerous animal is a thing being seen, I can think about the threat it poses by realizing that it is the me doing the seeing of it, and that the animal must therefore be in front of me, placing me squarely in the realm of danger. Then other parts of my brain, on an update cycle, can go into the me object to check whether or not isInDanger is enabled, at which point the various parts of the brain doing the checking can spring into action and do what they are programmed to do to resolve the situation while isInDanger is enabled. So to recap, the identity is essentially a box into which information about the world can be placed to orient the self to the world, and from which information about the self can be extracted in order to make various other calculations. Of course now we must take a step back and talk about thinking, as I have definitely skipped over knowledge representation. When we as human beings use words, we do so in a capacity above and beyond a simple grammar engine. I'm thinking of course about the Chinese room contention to the Turing test.
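To make the update-cycle idea concrete, here's a toy sketch where separate "subsystems" share the me object and one of them polls the isInDanger flag; everything besides that flag is hypothetical scaffolding of my own:

```python
class Me:
    """The 'me' object: a box that other subsystems read and write."""
    def __init__(self):
        self.is_in_danger = False  # the isInDanger flag from the post

def vision_subsystem(me, things_seen):
    """Whatever is seen is, by construction, in front of the me;
    seeing a threat flips the flag on the shared self object."""
    threats = {"bear", "snake"}
    me.is_in_danger = any(thing in threats for thing in things_seen)

def fear_subsystem(me):
    """On each update cycle, check the me object and react
    while the flag is enabled."""
    return "flee" if me.is_in_danger else "idle"

# one tick of the update cycle
me = Me()
vision_subsystem(me, ["tree", "bear"])
action = fear_subsystem(me)
```

The design point is that the two subsystems never talk to each other directly; they coordinate only through the shared me object, which is exactly the "box" role described above.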
We must think about how to transcend the web of words defined by other words, because the word only has weight in its reference to a thing, or in more abstract dialogue a word may exist as a reference to an idea, but on some level even some of the most abstract ideas have their roots in things or relationships between things or the states and conditions and natures of things. In some ways we can start to see a picture unfold where a generation of new ideas comes from abstraction from a few base concepts whose interplay and expansion give birth to an ever widening range of possibilities. The key then, in my mind anyway, is to determine what is the smallest set of concepts that can be used to create the primary building blocks for the kind of intelligence that could grow to human and superhuman levels of intelligence? Also what is the most efficient way to represent new ideas so that the consciousness doesn't severely leak memory before it has learned much of anything. We already have some semblance of a self, though not yet very well fleshed out. For an intelligence to properly orient itself I would suggest it also needs to have a concept of world and a concept of other. Without sensory devices (I should say without consideration of sensory devices for the time being) the intelligence needs an interface so that it can begin receiving new information from a specific instance of other and it needs a way to be able to sort the incoming information according to what it already knows. It also needs a way to create new temporary ideas which can be considered for future deletion, can be later so closely related with other ideas that they to an extent are more or less combined to save working memory, can be stored in a separate file and saved to disk, etc... 
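One rough way to picture that lifecycle of temporary ideas (delete, merge, persist to disk) is a store keyed by usage counts; this is just an illustrative guess at the bookkeeping, not any established memory-management scheme:

```python
import json

class IdeaStore:
    """Temporary ideas held in working memory: rarely used ones are
    dropped, closely related ones merged to save memory, and the rest
    can be serialized to disk."""

    def __init__(self):
        self.ideas = {}  # name -> {"uses": count, "data": attributes}

    def note(self, name, **data):
        """Register a use of an idea, creating it if it's new."""
        idea = self.ideas.setdefault(name, {"uses": 0, "data": {}})
        idea["uses"] += 1
        idea["data"].update(data)

    def prune(self, min_uses=2):
        """Consider rarely used ideas for deletion."""
        self.ideas = {k: v for k, v in self.ideas.items()
                      if v["uses"] >= min_uses}

    def merge(self, a, b, merged_name):
        """Fold two closely related ideas into one combined idea."""
        combined = {"uses": self.ideas[a]["uses"] + self.ideas[b]["uses"],
                    "data": {**self.ideas[a]["data"], **self.ideas[b]["data"]}}
        del self.ideas[a], self.ideas[b]
        self.ideas[merged_name] = combined

    def persist(self, path):
        """Store ideas in a separate file so working memory can be freed."""
        with open(path, "w") as f:
            json.dump(self.ideas, f)
```

The thresholds and the merge rule are arbitrary here; the part I'd defend is the shape: every idea carries a usage count, and that count is the signal that decides its fate.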
Here's where I think words come in. Rather than using dictionaries, I think the machine needs to be able to create a word web: each new string should initially be considered an independent idea, and the machine also needs a counter for new instances of an idea so that it can measure the extent of the relationship between ideas based on usage. In this way the machine begins to build a complex web of ideas, each having varying degrees of relationship with other ideas. It also needs a function to create new objects as its understanding of interrelated ideas grows. Furthermore, it needs a function to incorporate new ideas into base concepts. Here is where this conception, if it be worthy, could use the most fleshing out. Once you begin inputting new strings through the interface, initially the machine might have to request more information: whether it should think about the string as more of a property name, more of a method name, what the object would be, etc... As it evolves it should be able to reason such things for itself. Furthermore, there could be room for some genetic algorithms, but I haven't thought of an effective implementation with this conception quite yet. Like I said before, I'm still pretty green when it comes to AI, so go easy on me if you don't mind, but I think I'm ready to hear some critical feedback and additional thoughts.
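The word-web-with-counters idea could be sketched roughly like this, assuming simple co-occurrence counts as the measure of relatedness and an arbitrary threshold for promoting a pair of ideas into a combined concept:

```python
from collections import Counter
from itertools import combinations

class WordWeb:
    """Each new string starts as an independent idea; counting
    co-occurrences gradually measures how related ideas are."""

    def __init__(self, merge_threshold=3):
        self.ideas = set()
        self.cooccur = Counter()   # frozenset({a, b}) -> usage count
        self.merge_threshold = merge_threshold
        self.compound_ideas = set()

    def observe(self, utterance):
        """Register every word as an idea and count its pairings."""
        words = utterance.lower().split()
        self.ideas.update(words)
        for a, b in combinations(sorted(set(words)), 2):
            pair = frozenset((a, b))
            self.cooccur[pair] += 1
            # once two ideas co-occur often enough, promote the pair
            # to a new combined concept (the 'new objects' function)
            if self.cooccur[pair] == self.merge_threshold:
                self.compound_ideas.add(pair)

    def relatedness(self, a, b):
        """Degree of relationship between two ideas, based on usage."""
        return self.cooccur[frozenset((a, b))]
```

Usage would just be feeding it strings: `web.observe("the red apple")` repeatedly, then querying `web.relatedness("red", "apple")`. Obviously real relatedness needs more than raw counts, but this is the minimal version of the counter idea above.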
-
I suppose I'm still somewhat new to computer science, as I've only just started grasping at a deeper and more fundamental understanding of the principles behind the relationships between the various parts of a computer on the software and hardware levels. Here's what kills me though: I want to eventually start using my technology to at least make a little bit of money to get by, but EVERY time I look into the requirements of a money-making opportunity there are at least six items on the list I've never even heard of before. I have to imagine that if I'm going through it now, almost everybody has had to deal with it at some point.

From personal experience, how did you handle such requirements? Did you ignore the stated requirements and show off a worthy portfolio, discovering in the process that the requirements were more of a general guide than actual requirements? Did you identify the specific requirements of a job listing you wanted, studying and mastering them before even bothering to apply? Did you simply learn so much in the pursuit of computer science related degrees, or in the process of self-study, that no requirements seemed foreign to you?

I mean, it really is a lot to tackle. I'm still learning to accept that I'll never have the entire API memorised for C#, and I still have to "finish" learning SQL, Git, Lua, 3D math, and GPU tools and coding, to name a few, and even that doesn't seem to encompass a fraction of the list. Am I missing something integral to an understanding of how professional software development works? Is this just how it always goes? Is the list never-ending and ultimately not as important as it seems on the surface? I mean, I still need to go back and continue to study discrete math and algorithms more in depth, but half the time I don't even feel like I have the time, because there's too much new stuff on my plate to warrant a good solid review of the mathematics...
Which is sad, because I desperately want to become a better mathematician. It's just heartbreaking really; it's like playing non-stop Pokémon for life: "Gotta catch 'em all, knowledge!" But it's worse, because trying to encode new information into memory is ten times more frustrating and disappointing than watching the Poké Balls shake only to see the Pokémon go free. Can anyone relate to my pain, or am I just being a big baby?