Linguists study how speech is articulated and perceived, and there is a lot that is interesting in the process. What about music? Recently I have been practicing more violin, and then more piano, and in general it is very interesting to observe my physical movements (bow-arm movement on the violin, wrist movement, coordination, left-hand vibrato rate, left-hand wrist angle, etc.) and adjust them based on auditory feedback, that is, on what I hear when I play. There is a lot of nuance in it, and I try different movements to better shape the sound I produce. Sometimes I watch videos to see how people talk about these things. Isn't this analogous to babies babbling, trying out different tongue positions and articulatory gestures to figure out how to produce the right speech sound?
So, in a nutshell, the secret of learning to play a new instrument is much like learning to talk, except that we are probably far more expert at manipulating our tongue and jaw than at those fine movements of our hands and arms, plus the coordination with the instrument.
Also, in a nutshell, the masters of music must have exactly this expertise. Over hundreds of years of musical practice, people have more or less figured out the techniques that best achieve a match between movement and sound, such as the bow arm on the violin. That's what your violin teachers taught you. But keep in mind that even though your teacher may insist that everything s/he taught you about those movements be followed strictly, you should know better: they are just one out of many sets of methods that can produce a good sound from a movement, a shortcut. Just because one movement produces a good sound doesn't mean other movements cannot. Hence the different styles of playing.
All in all, music is not a science (there is no single right answer), but it is also a science (the relationship between movement and sound is lawful and can be studied).
Researchers at Duke looked at the different aspects of unemployment and the risks of heart attacks among 13,451 men and women, ages 51 to 75, who participated in the national Health and Retirement Study. Participants were interviewed every two years from 1992 to 2010.
Using statistical models, the researchers looked at associations between multiple aspects of employment instability and heart attacks. Among the findings presented online Monday in the Archives of Internal Medicine:
Participants had the same risk of a heart attack from unemployment no matter what their education level or socioeconomic situation, Dupre says.
George says the researchers don't know the exact mechanisms for the increased risk, but they do know that "anytime we are not as in control of our lives as we'd like to be, stress goes up."
When that happens, other health habits may slide too — people may eat less healthfully, stay up too late and not sleep as well, she says. There may be more strain and conflicts in the family. "We believe all these things are among the reasons why unemployment is linked to this increased risk of heart attack."
People should "be extra vigilant" about seeking medical help during times of unemployment, especially if they are experiencing any signs of a heart attack, George says.
"The gathering was meant to inspire multidisciplinary enthusiasm for the revival of the scientific question from which the field of artificial intelligence originated: how does intelligence work? How does our brain give rise to our cognitive abilities, and could this ever be implemented in a machine?
Noam Chomsky, speaking in the symposium, wasn't so enthused. Chomsky critiqued the field of AI for adopting an approach reminiscent of behaviorism, except in more modern, computationally sophisticated form. Chomsky argued that the field's heavy use of statistical techniques to pick regularities in masses of data is unlikely to yield the explanatory insight that science ought to offer. For Chomsky, the "new AI" -- focused on using statistical learning techniques to better mine and predict data -- is unlikely to yield general principles about the nature of intelligent beings or about cognition.
This critique sparked an elaborate reply to Chomsky from Google's director of research and noted AI researcher, Peter Norvig, who defended the use of statistical models and argued that AI's new methods and definition of progress is not far off from what happens in the other sciences.
Chomsky acknowledged that the statistical approach might have practical value, just as in the example of a useful search engine, and is enabled by the advent of fast computers capable of processing massive data. But as far as a science goes, Chomsky would argue it is inadequate, or more harshly, kind of shallow. We wouldn't have taught the computer much about what the phrase "physicist Sir Isaac Newton" really means, even if we can build a search engine that returns sensible hits to users who type the phrase in."
Speech recognition technology has seen its most recent breakthrough at Microsoft Research. The new model builds on the latest research in speech recognition involving deep neural networks (DNNs), replacing the previously predominant hidden Markov model (HMM) approach.
"Toward this end, we present a computational model for speech recognition that is inspired by several interrelated strands of research in phonology, acoustic phonetics, speech perception, and neuroscience."
"a hierarchical framework where each layer is designed to capture a set of distinctive feature landmarks. For each feature, a specialized acoustic representation is constructed in which that feature is easy to detect."
the video demo
article 1 (lots of math, more computational): Acoustic Modeling Using Deep Belief Networks
article 2 (more accessible for phoneticians, and more general)
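The "specialized acoustic representation per feature" idea can be made concrete with a toy example. The sketch below is my own illustration, not the actual model from the papers above: it builds one tiny detector for a single distinctive feature, [voice], using zero-crossing rate as the specialized representation, since voiced frames are dominated by low-frequency periodicity while fricative-like noise is not. The signals, threshold, and function names are all made up for illustration.

```python
import math
import random

SR = 8000          # sample rate (Hz)
FRAME_LEN = 400    # 50 ms frames

# Two synthetic frames: a 120 Hz tone (voiced-like) and white noise
# (voiceless fricative-like).
random.seed(0)
voiced = [math.sin(2 * math.pi * 120 * n / SR) for n in range(FRAME_LEN)]
voiceless = [random.uniform(-1, 1) for _ in range(FRAME_LEN)]

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs that change sign."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / len(frame)

def detect_voiced(frame, threshold=0.1):
    # Low ZCR -> periodic, low-frequency energy -> [+voice].
    # The threshold is an arbitrary choice for this toy.
    return zero_crossing_rate(frame) < threshold

print(detect_voiced(voiced))     # expected: True
print(detect_voiced(voiceless))  # expected: False
```

In the hierarchical framework quoted above, a real system would stack many such feature-specific detectors, each operating on a representation tuned to make its landmark easy to find, rather than one shared spectral front end.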
The foundation of modern science is the accuracy and reproducibility of numbers, often obtained from some fancy technological innovation. When fMRI and other (non-invasive) brain-imaging techniques came out, some people took them to reveal everything about the brain. Only later did we discover that how to interpret the results is a more important issue than the numbers themselves. This is true in linguistics too. We used to think Praat could give us lots of information accurately, and we believed those results without doubting them. But after you have used it for a couple of years, you start to wonder, because something seems wrong.
This has happened to me a lot lately. First came the discrepancies between the F0 values produced by Praat scripts I wrote and those measured manually; there is always a difference. Then, even measuring by hand, when you set the pitch range differently (say 75-500 Hz versus 50-500 Hz), you get different F0 contours! And that shifts the positions of some of the markings. The most ridiculous case was when I wrote another script to extract F1 values from thousands of sound samples: the F1 values came out different from those our post-doc measured by hand! How bizarre! We then discovered two causes: one is the formant settings; the other, more important, is that when you measure F1 by hand, you get a different value every time depending on how far you zoom in in the viewing window. Sometimes the difference can be as great as 200 Hz.
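The pitch-range effect is easy to reproduce in a toy setting. The sketch below is my own illustration, not Praat's actual pitch algorithm: it estimates F0 by picking the lag with the strongest normalized autocorrelation inside a floor/ceiling range, which is roughly what the range setting constrains. On a 100 Hz tone amplitude-modulated at 50 Hz (so the true period is 20 ms but most energy sits at 100 Hz), a 75-500 Hz range reports about 100 Hz while a 50-500 Hz range reports about 50 Hz, from the same samples.

```python
import math

SR = 8000
N = 4000  # 0.5 s of signal

# A 100 Hz tone amplitude-modulated at 50 Hz: strong 100 Hz component,
# weaker sidebands at 50 and 150 Hz.
x = [(1 + 0.5 * math.sin(2 * math.pi * 50 * n / SR))
     * math.sin(2 * math.pi * 100 * n / SR) for n in range(N)]

def estimate_f0(signal, sr, floor_hz, ceiling_hz):
    """Return sr/lag for the lag with the highest normalized
    autocorrelation, searching only lags allowed by the F0 range."""
    n = len(signal)
    lag_min = int(sr / ceiling_hz)   # shortest allowed period
    lag_max = int(sr / floor_hz)     # longest allowed period
    best_lag, best_r = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        r = sum(signal[i] * signal[i + lag]
                for i in range(n - lag)) / (n - lag)
        if r > best_r:
            best_lag, best_r = lag, r
    return sr / best_lag

print(estimate_f0(x, SR, 75, 500))  # floor 75 Hz -> about 100 Hz
print(estimate_f0(x, SR, 50, 500))  # floor 50 Hz -> about 50 Hz
```

Neither answer is wrong: one range reports the modulation period, the other the carrier. The setting determines which periodicity the tracker is even allowed to see, which is exactly why different ranges yield different contours.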
The moral of the story is, of course, that with any kind of machine we can't simply take the numbers at face value; we have to be more sophisticated about interpreting them. The best you can do is keep your settings consistent and be careful in your controls, because there is no right or wrong among these differences (the different values come from the different computations used by different methods, and it is really not easy to say which one is more correct). In a word, you can't take something as the truth just because a computer says so.