If someone is described as "smiling, but not with their eyes," that person is likely faking the smile. But what does that mean, exactly? And how can one tell a real grin from a fake one?

New software by California-based company Emotient can do just that. Using a simple digital camera, Emotient's software can analyze a human face and determine whether that person is feeling joy, sadness, surprise, anger, fear, disgust, contempt or any combination of those seven emotions.

"There's often a disconnect between what people say and what people do and what people think," said Marian Bartlett, co-founder and lead scientist at Emotient. The company's software, called Facet, can reconnect those dots by accurately reading the emotions registering on a person's face in a single photograph or video frame. All it needs is a resolution of at least 40 by 40 pixels. [Smile Secrets: 5 Things Your Grin Reveals About You]

Using Facet on a video sequence produces even more interesting results, because the software can track the fluctuations and strengths of emotion over time, and even capture "microexpressions," or little flickers of emotion that pass over people's faces before they can control themselves or are even aware they've registered an emotion.

"Even when somebody wants to keep a neutral face, you get microexpressions," Bartlett told LiveScience. That's because the human body has two different motor systems for controlling facial muscles, and the spontaneous one is slightly faster than the primary motor system, which controls deliberate motion. That means microexpressions will slip out — and as long as they appear on a single frame of video footage, Facet can recognize them, the company says.

The software can also pick up on other subtle facial signs that a human might miss. In the case of "smiling, but not with your eyes," Bartlett explained that when people smile sincerely, a muscle called the orbicularis oculi contracts, creating the wrinkles at the eyes' edges commonly known as "crow's feet." To catch these finer cues, the software does need a higher resolution than the 40-by-40-pixel minimum, but one still well within a common webcam's capabilities.

Medical applications

So, what are some uses for software that can identify human emotions based on facial expressions? Facet's applications are incredibly far-reaching, from treating children with autism to play-testing video games.

Recognizing other people's emotions based on their facial expressions is a challenge for many people who have an autism spectrum disorder, particularly children. As a research professor at the University of California, San Diego's Machine Perception Lab, Bartlett has been studying the use of facial-recognition software to help people with autism for several years.

Using an earlier version of the Facet software, for example, Bartlett and her colleagues created a game in which players are asked to mimic the facial expressions of a cartoonish character on the screen. The software assesses how closely the player recreates that expression and returns a score.

The game helps children with autism learn to recognize other people's emotions from their facial expressions, and it also teaches them to make expressions that convey their own feelings.

"Facial-expression recognition and facial-movement recognition are very closely intertwined," said Bartlett. "That's the way the brain works. It's not that you have one brain that does the recognition and one that does movement; it's a network that feeds on itself."

Building that facial muscle memory also helps players recognize their own emotions, and creates a sort of emotional memory, Bartlett added.

"If you move your face into a happy configuration, you tend to feel happier," Bartlett said. "People can manipulate this by putting a pencil in their mouth: It makes you smile, and you get autonomous nerve system memory. It helps with empathy: You see it, you do it and you feel it, and it helps jump-start the whole social empathy system."

Other types of games could benefit from emotion recognition as well. Imagine a pet simulator in which a virtual dog or cat reacts to the player's expressions: happy when the player is happy, sad when the player is sad, upset when the player is angry.

Even if it's not built into the game itself, emotion recognition could help designers play-test their games. If players are getting too angry at a certain level of the game, the designers might want to make that part easier. If players feel confusion — an emotion recently added to Facet's detection capabilities — the designers might need to add a tutorial.
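
To make the play-testing idea concrete, here is a minimal sketch of how per-level emotion scores might be aggregated to flag trouble spots. The record format, field names and thresholds are illustrative assumptions, not Emotient's actual output:

```python
# Hypothetical play-test analysis: flag levels whose average anger or
# confusion scores exceed a threshold. All data and names are made up.
from statistics import mean

# One record per analyzed video frame: level name plus 0-1 emotion intensities.
frames = [
    {"level": "level-3", "anger": 0.72, "confusion": 0.10},
    {"level": "level-3", "anger": 0.65, "confusion": 0.05},
    {"level": "level-4", "anger": 0.08, "confusion": 0.61},
]

def flag_levels(frames, anger_max=0.5, confusion_max=0.5):
    by_level = {}
    for f in frames:
        by_level.setdefault(f["level"], []).append(f)
    flags = {}
    for level, fs in by_level.items():
        notes = []
        if mean(f["anger"] for f in fs) > anger_max:
            notes.append("consider making it easier")
        if mean(f["confusion"] for f in fs) > confusion_max:
            notes.append("consider adding a tutorial")
        if notes:
            flags[level] = notes
    return flags

print(flag_levels(frames))
# {'level-3': ['consider making it easier'], 'level-4': ['consider adding a tutorial']}
```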

However, there is one possible use Emotient has no interest in exploring. "We're staying away from deception applications," said Bartlett.

It's easy to speculate on how software that can recognize the emotions behind human facial expressions, and even register fleeting microexpressions, could be used for covert purposes.

Instead, Bartlett said Emotient is now working on, among other things, the software's potential for identifying and treating depression. It goes back to the "smiling, but not with your eyes" idea: Depressed people often smile without activating their "crow's feet" muscle, not because they're trying to hide something, but because they have difficulty feeling any emotion at all.

Depression is often difficult to diagnose, and even more difficult to treat, but with Facet, doctors could make more accurate depression diagnoses, and also determine whether their patients are responding well to their medication, Bartlett said.

By Jillian Scharr, Staff Writer | www.livescience.com



Charles Q. Choi, Live Science Contributor
3 June 2016
Artificial intelligence may one day embrace the meaning of the expression "A picture is worth a thousand words," as scientists are now teaching programs to describe images as humans would.

Someday, computers may even be able to explain what is happening in videos just as people can, the researchers said in a new study.

Computers have grown increasingly better at recognizing faces and other items within images. Recently, these advances have led to image captioning tools that generate literal descriptions of images.

Now, scientists at Microsoft Research and their colleagues are developing a system that can automatically describe a series of images in much the same way a person would by telling a story. The aim is not just to explain what items are in the picture, but also what appears to be happening and how it might potentially make a person feel, the researchers said. For instance, if a person is shown a picture of a man in a tuxedo and a woman in a long, white dress, instead of saying, "This is a bride and groom," he or she might say, "My friends got married. They look really happy; it was a beautiful wedding."

The researchers are trying to give artificial intelligence those same storytelling capabilities.

"The goal is to help give AIs more human-like intelligence, to help it understand things on a more abstract level — what it means to be fun or creepy or weird or interesting," said study senior author Margaret Mitchell, a computer scientist at Microsoft Research. "People have passed down stories for eons, using them to convey our morals and strategies and wisdom. With our focus on storytelling, we hope to help AIs understand human concepts in a way that is very safe and beneficial for mankind, rather than teaching it how to beat mankind."

Telling a story

To build a visual storytelling system, the researchers used deep neural networks, computer systems that learn by example — for instance, learning how to identify cats in photos by analyzing thousands of examples of cat images. The system the researchers devised was similar to those used for automated language translation, but instead of teaching the system to translate from one language to another, the scientists trained it to translate images into sentences.
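
As a rough illustration of that translation framing, a minimal encoder-decoder might map an image-feature vector (for example, from a pretrained CNN) to the initial state of a recurrent sentence decoder. This is a toy sketch under assumed layer sizes and names, not the researchers' actual model:

```python
# Minimal "image-to-sentence translation" sketch: image features initialize
# a GRU decoder that predicts the next word at each step.
import torch
import torch.nn as nn

class TinyCaptioner(nn.Module):
    def __init__(self, feat_dim=512, hidden=256, vocab_size=1000):
        super().__init__()
        self.init_h = nn.Linear(feat_dim, hidden)   # image features -> initial decoder state
        self.embed = nn.Embedding(vocab_size, hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, image_feats, tokens):
        # image_feats: (batch, feat_dim), e.g. from a pretrained CNN
        # tokens: (batch, seq_len) word indices of the target sentence
        h0 = torch.tanh(self.init_h(image_feats)).unsqueeze(0)
        emb = self.embed(tokens)
        hidden_states, _ = self.gru(emb, h0)
        return self.out(hidden_states)  # per-step vocabulary logits

model = TinyCaptioner()
feats = torch.randn(2, 512)             # stand-in for CNN image features
tokens = torch.randint(0, 1000, (2, 7))
print(model(feats, tokens).shape)       # torch.Size([2, 7, 1000])
```

Training such a model on sequences of five photos, each paired with a sentence of a crowdsourced story, is the kind of setup the translation analogy suggests.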

The researchers used Amazon's Mechanical Turk, a crowdsourcing marketplace, to hire workers to write sentences describing scenes consisting of five or more photos. In total, the workers described more than 65,000 photos for the computer system. Because descriptions of the same scene could vary, the scientists preferred to have the system learn from accounts that were similar to other workers' accounts of the same scenes.

Then, the scientists fed their system more than 8,100 new images to examine what stories it generated. For instance, while an image captioning program might take five images and say, "This is a picture of a family; this is a picture of a cake; this is a picture of a dog; this is a picture of a beach," the storytelling program might take those same images and say, "The family got together for a cookout; they had a lot of delicious food; the dog was happy to be there; they had a great time on the beach; they even had a swim in the water."

One challenge the researchers faced was evaluating how effective the system was at generating stories. Human judgment is the best and most reliable measure of story quality, but the computer generated thousands of stories, which would take people a great deal of time and effort to examine.

Instead, the scientists tried automated methods for evaluating story quality, to quickly assess computer performance. In their tests, they focused on one automated method with assessments that most closely matched human judgment. They found that this automated method rated the computer storyteller as performing about as well as human storytellers.
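
The article doesn't name the metric, but automated story-quality scores of this kind are typically built on n-gram overlap between machine-generated and human-written text, in the spirit of BLEU or METEOR. A bare-bones, purely illustrative unigram-overlap scorer:

```python
# Illustrative only: score a machine story against a human reference by the
# fraction of its words that also appear in the reference (clipped counts).
from collections import Counter

def unigram_overlap(candidate, reference):
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    matches = sum(min(n, ref[w]) for w, n in cand.items())
    return matches / max(sum(cand.values()), 1)

machine = "the family had a great time on the beach"
human = "my family spent a great day at the beach"
print(round(unigram_overlap(machine, human), 2))  # 0.56
```

Real metrics add refinements such as higher-order n-grams, synonym matching and length penalties, but the principle of comparing machine output to human references is the same.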

Everything is awesome

Still, the computerized storyteller needs a lot more tinkering. "The automated evaluation is saying that it's doing as good or better than humans, but if you actually look at what's generated, it's much worse than humans," Mitchell told Live Science. "There's a lot the automated evaluation metrics aren't capturing, and there needs to be a lot more work on them. This work is a solid start, but it's just the beginning."


For instance, the system "will occasionally 'hallucinate' visual objects that are not there," Mitchell said. "It's learning all sorts of words but may not have a clear way of distinguishing between them. So it may think a word means something that it doesn't, and so [it will] say that something is in an image when it is not."

In addition, the computerized storyteller needs a lot of work in determining how specific or generalized its stories should be. For example, during the initial tests, "it just said everything was awesome all the time — 'all the people had a great time; everybody had an awesome time; it was a great day,'" Mitchell said. "Now maybe that's true, but we also want the system to focus on what's salient."

In the future, computerized storytelling could help people automatically generate tales for slideshows of images they upload to social media, Mitchell said. "You'd help people share their experiences while reducing nitty-gritty work that some people find quite tedious," she said. Computerized storytelling "can also help people who are visually impaired, to open up images for people who can't see them."

If AI ever learns to tell stories based on sequences of images, "that's a stepping stone toward doing the same for video," Mitchell said. "That could help provide interesting applications. For instance, for security cameras, you might just want a summary of anything noteworthy, or you could automatically live tweet events," she said.

The scientists will detail their findings this month in San Diego at the annual meeting of the North American Chapter of the Association for Computational Linguistics.

Original article on Live Science.

Copyright 2016 LiveScience, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.



Here are eight suggestions on how to study accounting:

  1. Stay current. Don't skip class, and don't skip assignments. Accounting concepts are not only related, they build on earlier concepts. Postponing work and cramming are the wrong ways to study accounting; don't delay gaining a true understanding of accounting concepts, homework assignments, exam questions, etc.
  2. Understanding beats memorizing. Accounting concepts are related. For example, if you have a true understanding of the accounting equation, debits and credits, and the matching principle, you will see the logic of adjusting entries. That understanding will eliminate the need to memorize a lot of details concerning adjusting entries. (See the worked example after this list.)
  3. Strive to understand WHY. As you are learning accounting, make sure that you know the WHY of each concept and principle. When doing your assignments, ask yourself "WHY is my work correct or incorrect?" If you don't understand WHY, read the free AccountingCoach.com explanation that pertains to the topic. Having a second presentation may give you the insight you need.

  4. Test your understanding. After briefly reviewing your lecture notes, try our free quiz questions with answers. We provide questions for each accounting topic so you can quickly identify what you know and what you don't know. (Studies have shown that answering quiz questions will improve the retention of information.) 

  5. Communicate with your instructor. If you do not understand the WHY of a concept or a solution, ask your instructor for assistance. There are several benefits associated with this: companies that recruit students are looking for communication skills, you will likely need a faculty reference sometime in the future, company recruiters might ask your instructor about you, etc.
  6. Realize that you are learning for your future. If you need more motivation, keep in mind that you are learning accounting for more than your next accounting exam. You are learning accounting to be successful in your first job, your career, your own business, etc.
  7. Review and prepare with our practice exams. To deepen your understanding and to improve your ability to recall accounting concepts, we provide 1,700+ practice exam questions (with answers). These are a great tool for preparing for your final exam or job interview. All of the questions were created by a CPA with more than 25 years of teaching experience. 
  8. If you struggle with the basics, we have visual tutorials. If you need more assistance, we offer eight Visual Tutorials for the following accounting and bookkeeping topics: debits and credits, accounting equation, adjusting entries, introduction to financial statements, balance sheet, income statement, cash flow statement, and bank reconciliation. Each tutorial presents the important concepts in a methodical, step-by-step manner and concludes with an interactive quiz to give you immediate feedback.
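
As a worked illustration of tip 2, here is a small sketch showing the accounting equation staying in balance through an adjusting entry. The account names and amounts are hypothetical, not from accountingcoach.com:

```python
# The accounting equation: Assets = Liabilities + Equity.
# Every entry, including adjusting entries, must keep it in balance.
assets = {"Cash": 10_000, "Prepaid Insurance": 1_200}
liabilities = {"Accounts Payable": 3_000}
equity = {"Owner's Equity": 8_200}

def in_balance():
    return sum(assets.values()) == sum(liabilities.values()) + sum(equity.values())

print(in_balance())  # True

# Month-end adjusting entry: one month of prepaid insurance has expired.
# Debit Insurance Expense (expenses reduce equity via net income),
# credit Prepaid Insurance (an asset).
assets["Prepaid Insurance"] -= 100
equity["Owner's Equity"] -= 100

print(in_balance())  # True: the equation still balances
```

Once you see that every adjusting entry is just the equation staying in balance, there is much less to memorize.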

Source: accountingcoach.com

