The TAUS Executive Forum in Tokyo was held on May 16–17, 2018, hosted by Oracle Japan. Most of the participants were from local language service providers, who, I suppose, are struggling to find an effective way to live with neural machine translation. So am I.
My name is Lilico Hara. I am a project manager and division director at DEPRO Co., Ltd., a language service provider located in Tokyo. I am not enthusiastic about attending conferences, but periodically I attend these kinds of events to check whether our current ideas, direction, and efforts are in line with current trends. I was very inspired by the new ideas and technologies introduced, and I enjoyed networking with people working in the same field. I hope this brief report can serve as a good representation of my impressions of the TAUS Executive Forum 2018.
New technologies are always interesting. I was surprised by the system introduced by the Director of Spoken Translation, Dr. Mark Seligman. He showed us a demo of “vocal machine translation”: a speaker wore a headset, pressed a button, and started to speak. Amazingly, translated texts appeared on the screen! How did that happen? From a practical point of view, further improvement is surely needed; however, I observed that some of the displayed translations were contextually correct. Congratulations! I am looking forward to witnessing the further development of this system.
Among the interesting presentations and discussions, I will focus on three topics that relate closely to our company.
Our efforts to adopt machine translation started in 2008. At the time, little was known about neural MT, at least at the LSP level, and it was still commonly believed that Japanese was one of the most difficult languages in which to get reasonable results from machine translation. We believed the situation would not change for the next 10 years. But it seems our presumption was wrong.
Localization projects have become so diverse that every client has its own requirements and quality expectations. 15 years ago, when I jumped into the localization field, I remember our company had just adopted Trados 3.0. Looking back, localization projects were simple and easy to handle, in a so-called “waterfall” style: when a product was released, we localized its software first, followed by its documentation. The volumes were always big, with reasonable turnaround time (TAT). The clients’ request was simple: good quality at a reasonable cost.
In the past decade, localization projects have changed a lot. Most software is now released in multiple languages at the same time (simship). As a result, we receive agile translation projects with small volumes and short TATs. Many factors must be considered, such as the variety of CAT tools, different quality levels, cost, target audience, and preferred tone, and, most importantly, we need to adopt machine translation into the workflow.
To meet these various requests, Mr. Alan Chung and Mr. Kenji Ozawa of WeLocalize suggested that project managers at an LSP should have a commercial mindset, always be strategic and attentive to financial analysis, and try to optimize and automate the localization process as much as possible while ensuring customer satisfaction. I fully agree with these points. It had been hard for me to embody these ideas in my daily work; however, this presentation helped me understand how to put these concepts into practice.
Another interesting presentation was “Training Framework for Post-Editors to Address NMT Outputs” presented by Toru Shishido of Human Science.
Human Science sent questionnaires to their translators about MT. Their analysis demonstrated that translators with a lengthy career in translation might not always be good post-editors, due to their preconceived negative opinions about machine translation.
Human Science’s approach to countering this kind of negative opinion is to “pre-process” machine-translated strings before they are handed over to the translators. Pre-processing fixes some well-known errors, such as wrong spacing. We at DEPRO do almost the same thing. Human Science created a software component to fix these errors. In our case, with our original tool, we not only fix spacing issues but also try to detect inconsistencies with the software user interface and fix style issues; additionally, our internal staff check the processed files to confirm what kinds of errors remain and to find further items that can be fixed next time. After several trials, we found this method extremely effective, though it still needs human intervention on our side. Even if we cannot develop a “better” machine translation engine, we can definitely optimize the outputs with our technical ingenuity.
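Our actual tool is proprietary, but the idea of rule-based pre-processing is simple to sketch. The snippet below is a minimal, hypothetical illustration (the glossary entries and regular-expression rules are my own examples, not Human Science’s or DEPRO’s actual rules): it collapses doubled spaces, removes stray spaces before Japanese punctuation, and flags UI terms whose approved translation is missing from the MT output.

```python
import re

# Hypothetical glossary mapping source UI terms to approved target strings.
UI_GLOSSARY = {
    "Save As": "名前を付けて保存",
    "Cancel": "キャンセル",
}

def fix_spacing(text: str) -> str:
    """Fix well-known MT spacing errors."""
    text = re.sub(r" {2,}", " ", text)          # collapse runs of spaces
    text = re.sub(r"\s+([、。])", r"\1", text)  # no space before Japanese punctuation
    return text.strip()

def find_ui_inconsistencies(source: str, target: str) -> list:
    """Flag glossary terms whose approved translation is absent from the target."""
    issues = []
    for en, ja in UI_GLOSSARY.items():
        if en in source and ja not in target:
            issues.append(f"UI term '{en}' not rendered as '{ja}'")
    return issues

def preprocess(source: str, mt_output: str):
    """Clean the MT output, then report remaining UI-term inconsistencies."""
    cleaned = fix_spacing(mt_output)
    return cleaned, find_ui_inconsistencies(source, cleaned)

cleaned, issues = preprocess(
    "Click Cancel to close the dialog.",
    "ダイアログを閉じるには  [Cancel] をクリックします 。",
)
# cleaned  → spacing errors repaired
# issues   → one flag: the approved translation of "Cancel" is missing
```

A real pipeline would carry far more rules, but even a small set like this reduces the mechanical errors a post-editor has to touch, which is the point of the approach.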
Finally, the “Knowledge Café”:
This unique discussion method (refer to Shall We Meet at a Knowledge Cafe?) brought us interesting new ideas. We created six groups of five or six people with different backgrounds to discuss the topics presented beforehand. Group members rotated three times. Some of the ideas were so interesting that I could not have thought of them on my own. At the end of the three rounds of discussion, I came away with some rough ideas of possible solutions to the problems we currently face.
The future is hard to predict, even for the next five years. But after attending the forum, I reached some conclusions, which I have already shared with our team members:
- Be sensitive to newly developed technologies and don’t be afraid of trying new things.
- Always try to find a “better way”.
- Do not just accept a client’s or translator’s requests, but talk with them until you find a mutually agreed solution. This will surely be the best way to gain their trust.
With all these takeaways, we are already looking forward to attending the next TAUS event.