In a new paper, Stanford researchers describe a mathematical model they created that helps predict pragmatic reasoning and may ultimately lead to machines that can better grasp inference, context and social rules. To understand what somebody means, you need context. Consider the phrase ‘Man on first.’ It doesn’t make much sense unless you’re at a baseball game. Or imagine a sign outside a children’s boutique that reads, ‘Baby sale – One week only!’ You easily infer from the situation that the store isn’t selling babies but advertising bargains on gear for them. Present these widely cited scenarios to a computer, however, and there would likely be a communication breakdown. Computers are not great at pragmatics: how language is used in social situations.
But a pair of Stanford researchers has taken the first steps toward changing that. In a new paper published recently in the journal Science, Assistant Professors Michael Frank and Noah Goodman describe a quantitative theory of pragmatics that promises to help open the door to more human-like computers, ones that use language as flexibly as we do. The statistical model they developed helps predict pragmatic reasoning and may ultimately lead to machines that can better understand inference, context and social rules. The work could help researchers better understand language and treat people with language disorders. It might also make talking to an automated customer service agent a little less frustrating. ‘If you’ve ever called an airline, you know the computer voice recognizes words, but it doesn’t necessarily understand what you mean,’ Frank said. ‘That’s a key feature of human language. In some sense it’s all about what the other person is trying to tell you, not what they’re actually saying.’
Frank and Goodman’s work is part of a broader trend of trying to understand language with statistical tools. That trend has produced systems such as Siri, the iPhone’s speech-recognition personal assistant. But turning language and speech into numbers has its obstacles, chiefly the difficulty of formalizing notions such as ‘common knowledge’ or ‘informativeness.’ That’s what Frank and Goodman set out to address. The researchers recruited 745 participants for an online experiment. The participants viewed a set of objects and were asked to bet on which one was being referred to by a particular word. For example, one group of participants saw a blue square, a blue circle and a red square. The question for that group was: Imagine you are talking to someone and you want to refer to the middle object. Which word would you use, ‘blue’ or ‘circle’? The other group was asked: Imagine someone is talking to you and uses the word ‘blue’ to refer to one of these objects. Which object are they talking about? ‘We modeled how a listener understands a speaker and how a speaker decides what to say,’ Goodman explained.
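To make the experiment’s logic concrete, here is a minimal sketch, in Python, of the kind of probabilistic model the researchers tested: a speaker who prefers more informative words (words that pick out fewer objects in the context) and a listener who reasons about that speaker’s choice. The object labels, scoring rule and uniform prior below are illustrative assumptions based on the example in this article, not the paper’s exact formulation.

```python
# A sketch of informative-speaker / reasoning-listener inference for the
# article's example context: a blue square, a blue circle and a red square.

CONTEXT = {
    "blue square": {"blue", "square"},
    "blue circle": {"blue", "circle"},
    "red square":  {"red", "square"},
}

def speaker(obj):
    """Probability that a speaker uses each applicable word for `obj`,
    weighting words by informativeness (1 / number of objects the word fits)."""
    words = CONTEXT[obj]
    scores = {w: 1.0 / sum(w in feats for feats in CONTEXT.values()) for w in words}
    total = sum(scores.values())
    return {w: s / total for w, s in scores.items()}

def listener(word):
    """Probability a listener assigns to each object after hearing `word`,
    assuming a uniform prior over objects and reasoning about the speaker above."""
    scores = {obj: speaker(obj).get(word, 0.0) for obj in CONTEXT}
    total = sum(scores.values())
    return {obj: s / total for obj, s in scores.items() if s > 0}

print(speaker("blue circle"))  # 'circle' is favored over 'blue', roughly 2/3 vs 1/3
print(listener("blue"))        # the blue square edges out the blue circle, 0.6 vs 0.4
```

In this toy version, a speaker referring to the blue circle favors ‘circle’ over ‘blue’ by about two to one, and a listener who hears ‘blue’ bets somewhat more on the blue square than on the blue circle, because a speaker who meant the circle would probably have said ‘circle.’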
The results allowed Goodman and Frank to construct a mathematical equation that predicts human behavior and determines the likelihood of referring to a particular object. ‘Before, you couldn’t take these informal theories of linguistics and put them into a computer. Now we’re starting to be able to do that,’ Goodman said. The researchers are now applying the model to studies of hyperbole, sarcasm and other aspects of language. ‘It will take years of work, but the dream is of a computer that really is thinking about what you mean and what you want rather than just what you said,’ Frank said.
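For readers curious what such an equation might look like, models in this line of work typically pair a speaker rule with a listener rule along the following lines; this is a hedged reconstruction consistent with the sketch above, not necessarily the paper’s exact notation. Writing $|w|$ for the number of objects in the context that word $w$ applies to and $P(o)$ for a prior over objects:

$$
P_S(w \mid o) \;=\; \frac{|w|^{-1}}{\sum_{w' \text{ true of } o} |w'|^{-1}},
\qquad
P_L(o \mid w) \;\propto\; P_S(w \mid o)\, P(o).
$$

With a uniform prior on the three objects above, hearing ‘blue’ gives odds of about 3 to 2 that the speaker means the blue square rather than the blue circle.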