I am currently looking to hire a Spring 2019 Natural Language Processing co-op/intern to work with me on the NLP team in Applied Research at Bose. This will be a great opportunity to gain industry research experience while working fully embedded in the NLP team for 6 months. Please see the job description below.
(The official job description will be posted on Bose Careers website shortly)
Natural Language Processing Co-op/Intern Spring 2019
At Bose, we’ve spent more than 50 years finding new ways to bring pioneering audio products to millions of people in their homes, cars, planes, and just about anywhere else that there is a possibility to enjoy music. We believe that to succeed for the next 50 years we must drive innovation by researching product concepts in audio and beyond that deliver on our human-centered brand promise to help people be more, feel more, and do more.
The Contextual Language Processing team in Applied Research at Bose is looking for a passionate NLP co-op to work at the intersection of Artificial Intelligence and User Experience. The duration of this position is 6 months starting January 2019 (full-time 40 hours/week).
What you can expect:
- Work as part of a passionate and collaborative team of NLP engineers
- Make an impact on real-world Bose products
- Gain hands-on experience in software engineering, cloud computing, and applied NLP/ML
- Contribute to our vision of the future of AI- and Cloud-enabled Bose products
What you’ll do:
- Work on a focused NLP research project with a mentor in areas such as spoken dialogue systems and knowledge engineering
- Work on a variety of applied NLP problems such as natural language understanding, text classification, NER, summarization, etc.
- Implement cloud micro-services that serve data processing, feature extraction, and natural language understanding use cases
- Research, implement, and evaluate a variety of published approaches and algorithms for NLP problems
- Create innovative solutions to NLP problems by applying linguistically informed, statistical and machine learning based methods
What we’re looking for:
- Strong programming background with 2+ years of experience in Python and/or Java
- Creative and collaborative mindset
- Undergraduate or graduate students in computer science, computational linguistics, or related majors, with experience building NLP technologies
- Strong background in Linear Algebra, Probability, and Statistics
- (Desired) Experience working with Spoken Dialogue Systems
- (Desired) Experience with Ontology Management and Knowledge-Based Reasoning
Bose Corporation is a privately held company specializing in audio and other consumer electronics products. Massachusetts Institute of Technology (MIT) is the majority owner of Bose.
Bose is an Equal Opportunity Employer that is committed to inclusion and diversity. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, genetic information, national origin, age, disability, veteran status or any other legally protected characteristics.
Contact: Please send your resume directly to Shuo Zhang at email@example.com .
2017 MASC-SLL Call for Papers
Mid-Atlantic Student Colloquium on Speech, Language and Learning
May 6th, 2017 George Washington University, Washington, DC
The sixth annual Mid-Atlantic Student Colloquium on Speech, Language and Learning will be hosted at the George Washington University (Washington, DC) on May 6th, 2017. The Student Colloquium is intended to bring together students studying computational approaches to speech, language, and learning, so that they can introduce their research to the local student community, give and receive feedback, and engage each other in collaborative discussions.
Students are encouraged to submit 1-page abstracts describing ongoing, planned, or completed research projects, including previously published results as well as negative results. Student research in any field applying computational methods to any aspect of human language (speech, text, other modalities, linguistics, neuroscience, information science, and related fields) is welcome. Submissions and presentations must be made by students or postdocs. Accepted submissions will be presented as either posters or oral talks. See http://www.mascsll.org/2017/call-for-papers for abstract submission details. The submission deadline is March 31.
There will be no registration fee for the colloquium. Students and postdocs are encouraged to consult with their supervisors about potential reimbursement of travel expenses.
Topics
Relevant topics include, but are not limited to:
The sixth Mid-Atlantic Student Colloquium on Speech, Language and Learning (http://www.mascsll.org/) is a one-day event bringing together students, postdocs, faculty, and researchers from universities in the Mid-Atlantic area doing research on speech, language, or machine learning. The colloquium is an opportunity for students and postdocs to present preliminary or completed work and to network with other researchers working in related fields.
This year the event will be held at the George Washington University in Washington, DC, on Saturday, May 6th, 2017. The colloquium will run from around 9:30am to 5:00pm. There will be no registration charge, and lunch and refreshments will be provided.
Students and postdocs should submit one-page abstracts describing ongoing, planned, or completed research on computational methods applied to any aspect of human language. Appropriate abstracts will be selected for presentation during one of several poster sessions or for a short oral presentation.
The goal of the event is to network, share the research you are doing, and learn about what others are working on. We do not require that your work be unpublished in order to be presented. We strongly encourage you to submit anything from work-in-progress to work that has been previously published and presented at a conference. Everything is welcome!
The exact program is still being determined, but it usually includes one or more invited talks, a panel session, poster sessions, and breakout/discussion sessions.
This is an ACL-registered event that I am helping to organize.
Check out the same event from last year:
Submission deadline (abstracts): March 31
Decisions announced: April 10
Registration opens: April 10
Program schedule released: April 15
Registration closes: May 1
Colloquium: May 6
Mark Dredze gave a talk at GU on using topic modeling to study social media (Twitter) posts related to gun issues in the US. The technique, called collective supervised topic modeling, is interesting and worth looking into; see this paper.
I have been working with Symbolic Aggregate approXimation (SAX), a symbolic representation of time series with a MINDIST distance measure that lower-bounds the Euclidean distance. The original code was implemented in MATLAB (Jessica Lin, Li Wei), and I have been using a module called saxpy for Python. While saxpy is nice, it is also very slow when it comes to mining large data sets. I have therefore re-implemented saxpy (I call it saxpyFast) to speed up the computation of SAX and MINDIST by using integer arrays as the internal symbolic representation instead of strings, along with MATLAB-like compact matrix operations in numpy. This speeds up the computation by roughly 4x. See my implementation and demo cases here, and a discussion with the original author of saxpy here.
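For readers unfamiliar with SAX, here is a minimal sketch of the integer-array idea (my own simplification, not the actual saxpyFast code; function names are mine, and it assumes the series length divides evenly by the word size): symbols are stored as numpy integer arrays, so MINDIST reduces to a vectorized table lookup instead of per-character string comparisons.

```python
import numpy as np
from scipy.stats import norm

def sax_word(series, w, a):
    """Convert a time series to a SAX word stored as an integer array (0..a-1)."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / x.std()                 # z-normalize
    paa = x.reshape(w, -1).mean(axis=1)          # PAA; assumes len(x) % w == 0
    breakpoints = norm.ppf(np.arange(1, a) / a)  # equiprobable Gaussian breakpoints
    return np.searchsorted(breakpoints, paa)     # map each segment to an integer symbol

def symbol_dist_table(a):
    """Precompute the (a, a) pairwise symbol-distance table used by MINDIST."""
    b = norm.ppf(np.arange(1, a) / a)
    i, j = np.indices((a, a))
    hi, lo = np.maximum(i, j), np.minimum(i, j)
    # adjacent or identical symbols have distance 0; clip keeps indexing in range
    return np.where(hi - lo <= 1, 0.0,
                    b[np.clip(hi - 1, 0, a - 2)] - b[np.clip(lo, 0, a - 2)])

def mindist(word1, word2, n, table):
    """Lower-bounding MINDIST between two SAX words of an n-point series."""
    d = table[word1, word2]                      # one fancy-indexing lookup, no strings
    return np.sqrt(n / len(word1)) * np.sqrt(np.sum(d ** 2))
```

Because each symbol is a plain integer, the per-pair distance is a single array lookup into a precomputed table, which is the kind of compact matrix operation that makes the numpy version fast relative to a string-based implementation.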
Prof. Hal Daumé III of UMD visited GUCL to give a talk at the CS department, and we had a good meeting with him afterwards to talk about our work.
CS COLLOQUIUM: PROF. HAL DAUME III (UMD), FRIDAY, OCTOBER 14, 11:00AM–12:00PM, STM 326
Learning Language through Interaction
Machine learning-based natural language processing systems are amazingly effective, when plentiful labeled training data exists for the task/domain of interest. Unfortunately, for broad coverage (both in task and domain) language understanding, we're unlikely to ever have sufficient labeled data, and systems must find some other way to learn. I'll describe a novel algorithm for learning from interactions, and several problems of interest, most notably machine simultaneous interpretation (translation while someone is still speaking).
This is all joint work with some amazing (former) students He He, Alvin Grissom II, John Morgan, Mohit Iyyer, Sudha Rao and Leonardo Claudino, as well as colleagues Jordan Boyd-Graber, Kai-Wei Chang, John Langford, Akshay Krishnamurthy, Alekh Agarwal, Stéphane Ross, Alina Beygelzimer and Paul Mineiro.
Bio: Hal Daumé III is an associate professor in Computer Science at the University of Maryland, College Park. He holds joint appointments in UMIACS and Linguistics. He was previously an assistant professor in the School of Computing at the University of Utah. His primary research interest is in developing new learning algorithms for prototypical problems that arise in the context of language processing and artificial intelligence. This includes topics like structured prediction, domain adaptation and unsupervised learning; as well as multilingual modeling and affect analysis. He associates himself most with conferences like ACL, ICML, NIPS and EMNLP. He earned his PhD at the University of Southern California with a thesis on structured prediction for language (his advisor was Daniel Marcu). He spent the summer of 2003 working with Eric Brill in the machine learning and applied statistics group at Microsoft Research. Prior to that, he studied math (mostly logic) at Carnegie Mellon University. He still likes math and doesn't like to use C (instead he uses O'Caml or Haskell).
[Amir Zeldes] XRENNER (eXternally configurable REference and Non Named Entity Recognizer) is on PyPI now. This is a coreference resolution tool described in the paper my mentor Amir Zeldes and I co-authored at the NAACL 2016 CORBON workshop. You can now install it with pip install xrenner. LINK: https://pypi.python.org/pypi/xrenner/
We are releasing the Good-sounds Dataset. It contains monophonic recordings of instrumental sounds and accompanying metadata. The dataset is composed of 8750 recordings of 12 different instruments (flute, cello, clarinet, trumpet, violin, alto sax, tenor sax, baritone sax, soprano sax, oboe, piccolo, and double bass) recorded by 15 different musicians. Some of these recordings were made with several microphones at the same time, leading to a total of 16,308 audio files.
Two kinds of recordings are available: single notes spanning almost the whole range of each instrument, as well as some scales.
The dataset is freely available for download here.
It was described in the publication "A real-time system for measuring sound goodness in instrumental sounds" and is used in the paper we will present at the upcoming ISMIR 2016: "Good-sounds.org: a framework to explore goodness in instrumental sounds".
Oriol Romaní Picas
Music Technology Group
Universitat Pompeu Fabra
Note: this is also available on Freesound under the user MTG.
The Georgetown University Computational Linguistics Group held its first meeting of the Fall term on September 16, 2016, organized by Nathan Schneider (our new professor in LING and CS). It is very exciting to see some ~20 people from the LING and CS departments who are working on computational linguistics and IR topics, among others. After years of development, I feel that computational linguistics is growing at a good rate here. I look forward to many more activities and collaborations to come.