Last Thursday (Feb. 14), the nonprofit research company OpenAI released a new language model capable of generating convincing passages of prose. So convincing, in fact, that the researchers have refrained from open-sourcing the code, hoping to stall its potential weaponization as a means of mass-producing fake news.
While the impressive results are a remarkable leap beyond what existing language models have achieved, the technique involved isn't exactly new. Instead, the breakthrough was driven primarily by feeding the algorithm ever more training data, a trick that has also been responsible for most of the other recent advances in teaching AI to read and write. "It's kind of surprising people in terms of what you can do with (…) more data and bigger models," said Percy Liang, a computer science professor at Stanford University.
The passages of text that the model generates are good enough to pass as human writing. But this capability should not be confused with a genuine understanding of language, the ultimate goal of the subfield of AI known as natural-language processing (NLP). (There is an analogue in computer vision: an algorithm can synthesize highly realistic images without any true visual comprehension.) In fact, getting machines to that level of understanding is a task that has largely eluded NLP researchers. That goal could take years, even decades, to achieve, Liang guesses, and is likely to involve techniques that don't yet exist.
#1. Distributional semantics
The linguistic philosophy. Words derive meaning from how they are used. For example, the words "cat" and "dog" are related in meaning because they are used in more or less the same way: you can feed and pet a cat, and you can feed and pet a dog. You cannot, however, feed and pet an orange.
How it translates to natural-language processing. Algorithms based on distributional semantics have been largely responsible for the recent breakthroughs in NLP. They use machine learning to process text, finding patterns essentially by counting how often and how closely words are used in relation to one another. The resulting models can then use those patterns to construct complete sentences or paragraphs, and they power things like autocomplete and other predictive text systems. In recent years, some researchers have also begun experimenting with the distributions of random character sequences rather than words, so models can more flexibly handle acronyms, punctuation, slang, and other things that don't appear in the dictionary, as well as languages without clear delineations between words.
Advantages. These algorithms are flexible and scalable, because they can be applied within any context and learn from unlabeled data.
Disadvantages. The models they produce don't actually understand the sentences they construct. At the end of the day, they are writing prose using word associations.
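The counting at the heart of this approach can be sketched in a few lines. The snippet below is a minimal illustration (the corpus, window size, and function name are invented for the example, not taken from any real system): it tallies how often pairs of words appear near each other, which is the raw signal a distributional model learns from.

```python
from collections import Counter

def cooccurrence_counts(sentences, window=2):
    """Count how often pairs of words appear within `window` words of
    each other -- the basic bookkeeping behind distributional semantics."""
    counts = Counter()
    for sentence in sentences:
        words = sentence.lower().split()
        for i, w in enumerate(words):
            for other in words[i + 1 : i + 1 + window]:
                counts[tuple(sorted((w, other)))] += 1
    return counts

corpus = [
    "you can pet a cat",
    "you can pet a dog",
    "you can feed a cat",
    "you can feed a dog",
]
counts = cooccurrence_counts(corpus)
# "cat" and "dog" never co-occur here, but they share the same
# neighboring contexts ("pet a …", "feed a …") -- that shared usage
# is exactly what a distributional model picks up on.
print(counts[("a", "cat")], counts[("a", "dog")])  # 2 2
```

Real systems replace these raw counts with learned vector representations, but the underlying intuition is the same: words used in similar contexts end up with similar representations.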
#2. Frame semantics
The linguistic philosophy. Language is used to describe actions and events, so sentences can be subdivided into subjects, verbs, and modifiers: the who, what, where, and when.
How it translates to natural-language processing. Algorithms based on frame semantics use a set of rules or lots of labeled training data to learn to deconstruct sentences. This makes them particularly good at parsing simple commands, and thus useful for chatbots or voice assistants. If you asked Alexa to "find a restaurant with four stars for tomorrow," for example, such an algorithm would figure out how to execute the sentence by breaking it down into the action ("find"), the what ("restaurant with four stars"), and the when ("tomorrow").
Advantages. Unlike distributional-semantics algorithms, which don't understand the text they learn from, frame-semantics algorithms can distinguish the different pieces of information in a sentence. They can be used to answer questions like "When is this event taking place?"
Disadvantages. These algorithms can only handle very simple sentences, and therefore fail to capture nuance. Because they require a lot of context-specific training, they are also inflexible.
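The slot-filling step can be sketched with a few hand-written rules. This is a deliberately toy parser (the word lists and function name are made up for illustration, nothing like a production assistant's grammar), showing how the Alexa command above might be decomposed into action/what/when slots:

```python
# Toy frame parser: map a command onto action / what / when slots
# using hand-written rules, in the spirit of frame semantics.
ACTIONS = {"find", "book", "play"}
TIME_WORDS = {"today", "tomorrow", "tonight"}
STOP_WORDS = {"a", "an", "the", "for", "with"}

def parse_command(command):
    """Break a simple command into a frame of labeled slots."""
    frame = {"action": None, "what": [], "when": None}
    for word in command.lower().split():
        if frame["action"] is None and word in ACTIONS:
            frame["action"] = word          # the verb: what to do
        elif word in TIME_WORDS:
            frame["when"] = word            # the when
        elif frame["action"] is not None and word not in STOP_WORDS:
            frame["what"].append(word)      # everything else: the what
    frame["what"] = " ".join(frame["what"])
    return frame

frame = parse_command("find a restaurant with four stars for tomorrow")
print(frame)  # {'action': 'find', 'what': 'restaurant four stars', 'when': 'tomorrow'}
```

The brittleness is visible immediately: any command whose verb or time expression isn't in the hard-coded lists falls through, which is why these systems need so much specific training data or rule-writing.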
#3. Model-theoretical semantics
The linguistic philosophy. Language is used to communicate human knowledge.
How it translates to natural-language processing. Model-theoretical semantics is based on an old idea in AI that all of human knowledge can be encoded, or modeled, in a series of logical rules. So if you know that birds can fly, and eagles are birds, then you can deduce that eagles can fly. This approach is no longer in vogue, because researchers soon realized there were too many exceptions to each rule (for example, penguins are birds but cannot fly). But algorithms based on model-theoretical semantics are still useful for extracting information from models of knowledge, such as databases. Like frame-semantics algorithms, they parse sentences by deconstructing them into parts. But whereas frame semantics defines those parts as the who, what, where, and when, model-theoretical semantics defines them as the logical rules encoding knowledge. For example, consider the question "What is the largest city in Europe by population?" A model-theoretical algorithm would break it down into a series of self-contained queries: "What are all the cities in the world?" "Which are in Europe?" "What are the cities' populations?" "Which population is the largest?" It would then be able to traverse the model of knowledge to give you the final answer.
Advantages. These algorithms give machines the ability to answer complex and nuanced questions.
Disadvantages. They require a model of knowledge, which is time-consuming to build, and they are not flexible across different contexts.
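Both halves of this idea (rules with exceptions, and decomposing a question into sub-queries against a knowledge store) can be sketched together. Everything here is illustrative: the tiny fact tables, the exception set, and the rough population figures are placeholders, not authoritative data.

```python
# Toy model-theoretical setup: facts plus a rule with explicit exceptions.
IS_A = {"eagle": "bird", "penguin": "bird"}
FLIGHTLESS = {"penguin", "ostrich"}  # the exceptions that sank pure rule systems

def can_fly(animal):
    # Rule: birds can fly -- unless they are a known exception.
    return IS_A.get(animal) == "bird" and animal not in FLIGHTLESS

# A tiny "database" standing in for a model of knowledge.
# (Population figures are rough illustrative values.)
CITIES = {
    "istanbul": {"continent": "europe", "population": 15_500_000},
    "london":   {"continent": "europe", "population": 9_000_000},
    "tokyo":    {"continent": "asia",   "population": 14_000_000},
}

def largest_european_city():
    # Decompose the question into self-contained sub-queries:
    # which cities are in Europe? -> what are their populations?
    # -> which population is the largest?
    in_europe = [c for c, info in CITIES.items() if info["continent"] == "europe"]
    return max(in_europe, key=lambda c: CITIES[c]["population"])

print(can_fly("eagle"), can_fly("penguin"))  # True False
print(largest_european_city())
```

The time cost the article mentions is visible even at this scale: every fact, rule, and exception has to be entered by hand, and none of it transfers to a new domain.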
#4. Grounded semantics
The linguistic philosophy. Language derives meaning from lived experience. In other words, humans created language to achieve their goals, so it must be understood within the context of our goal-oriented world.
How it translates to natural-language processing. This is the newest approach and the one Liang considers to hold the most promise. It tries to mimic how humans pick up language over the course of their lives: the machine starts with a blank state and learns to associate words with the correct meanings through conversation and interaction. In a simple example, if you wanted to teach a computer how to move objects around in a virtual world, you would give it a command like "Move the red block to the left" and then show it what you meant. Over time, the machine would learn to understand and execute the commands without help.
Advantages. In theory, these algorithms should be very flexible and get the closest to a genuine understanding of language.
Disadvantages. Teaching is very time-consuming, and not all words and phrases are as easy to illustrate as "Move the red block."
In the short term, Liang thinks, the field of NLP will see much more progress from exploiting existing techniques, particularly those based on distributional semantics. But over the longer term, he believes, they all have limits. "There is a qualitative gap between the way that humans understand language and perceive the world and our current models," he said. Closing that gap would probably require a new way of thinking, he added, as well as more time.
This originally appeared in our AI newsletter, The Algorithm. To have it sent directly to your inbox, sign up here for free.