My Introduction

This is a meeting between translators and our futures. I’m not certain I’m going to have a job in five years. Nobody knows what’s going to happen, but we really have no alternative but to just run like hell.

The recent announcements from OpenAI were astounding. I myself haven’t caught up with what we had a year ago, and now they’ve given us more. Some of it is ready to use in CotranslatorAI. The big things: multimodal capabilities, a larger context window, faster, cheaper, better.

These have already made our recent presentations obsolete. Things are changing that fast. It’s not even clear that losing context is a relevant concern anymore. OpenAI was already the undisputed leader before the latest announcements; nobody else’s model is even close in most use cases I can think of, and now these changes put them even further ahead.

 

Creating a GPT

The GPTs might become translation bots, but so far they aren’t working too well. Vladimir found that the GPTs don’t follow instructions reliably, even though custom instructions had worked well before. Silvia found that a GPT forgets things after she logs off or leaves and comes back. Philippe found that even when a GPT was trained on specific documents, it would still bring in outside data, even after being told not to answer outside those documents. Silvia’s bot is also slow and incomplete, doesn’t always work, and only translates a certain amount. Sometimes it says it can’t translate something, and Vladimir said it was producing apparent errors.

Silvia is concerned that OCR in the AI could make it too easy to copy books. Philippe asked whether the GPTs are available in the API, since otherwise the confidentiality problem is not solved here. I think the Assistants API is analogous to the GPTs. When Dave tried uploading a file, it said that he needed to upload a JSON file. Perhaps that was a glitch, and it might work better if he tries again.
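For anyone who wants to experiment, here is a minimal sketch of what attaching a reference file to an assistant looks like in the Assistants API. It assumes the openai Python package (v1.x) with an OPENAI_API_KEY in the environment, plus a hypothetical glossary.pdf; the Assistants API is in beta, so the exact parameters may change.

```python
# Hypothetical sketch: upload a reference file and attach it to an assistant.
# Assumes openai v1.x and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Upload a reference document (e.g., a glossary or style guide)
ref_file = client.files.create(
    file=open("glossary.pdf", "rb"),  # hypothetical file name
    purpose="assistants",
)

# Create an assistant that can retrieve information from the uploaded file
assistant = client.beta.assistants.create(
    name="Translation helper",
    instructions="Translate using only the terminology in the attached glossary.",
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],
    file_ids=[ref_file.id],
)
print(assistant.id)
```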

GPTs are clever because we can share them with others; one took a file upload and gave a file back for download. They can also pull data off the internet, and you can upload a graphic for reference. So even if they don’t work so great now, imagine where they will be a year from now, considering how far we’ve come in the last year.

One nice application might be something like what Trados is doing with an AI bot that answers questions from their knowledge base. We could put something similar on a website, in help files, etc.

Steps to create a GPT:

  1. Create one
  2. Then start talking with it about what you want to do. It will create a graphic and a name and build out the GPT.

It is not clear to me just how different this is from a well-designed anchor prompt in CotranslatorAI.

 

Long-document workflow

The long-document workflow changes with the larger context, because it can really run up the tokens. The best solution I’ve got is to translate in bigger chunks at once, rather than one sentence at a time; see the sketch below. I’m thinking this is also a good reason to update the anchor prompt from time to time, to keep the context-window tokens from exploding. There is also some question about how well the AI will work with a huge amount of reference material, but in theory, CotranslatorAI doesn’t drop anything. And when someone says GPT-4 is not as good as Google Translate or DeepL, I have to wonder whether they are managing the context appropriately.
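As an illustration of what “bigger chunks” could look like in practice, here is a minimal sketch (not CotranslatorAI’s actual implementation) of sending a long document to the API a batch of paragraphs at a time, re-sending the anchor prompt with each request so the conversation history never piles up. The model name, glossary, and chunk size are assumptions; it needs the openai Python package (v1.x) and an OPENAI_API_KEY.

```python
# Sketch: translate a long document in larger chunks, re-sending the anchor
# prompt with each chunk so the context window doesn't keep growing.
# Assumes openai v1.x and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

ANCHOR_PROMPT = (
    "You are translating a technical manual from German into English. "
    "Keep terminology consistent with the glossary below.\n"
    "Glossary: Drehmoment = torque; Welle = shaft."  # hypothetical glossary
)

def translate_chunks(paragraphs, chunk_size=10):
    """Translate `chunk_size` paragraphs per request instead of one sentence at a time."""
    results = []
    for i in range(0, len(paragraphs), chunk_size):
        chunk = "\n\n".join(paragraphs[i : i + chunk_size])
        response = client.chat.completions.create(
            model="gpt-4-1106-preview",
            messages=[
                {"role": "system", "content": ANCHOR_PROMPT},
                {"role": "user", "content": chunk},
            ],
        )
        results.append(response.choices[0].message.content)
    return "\n\n".join(results)
```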

If there are problems with an output, there can be several causes: 1) the AI might just not be ready for that task, 2) the prompt may need improvement, 3) …

How can I use GPT-4-Turbo?

It is available in ChatGPT+ already. In the API, it is gpt-4-1106-preview (assuming you have access to the GPT-4 models). At some point in the near future, they will probably release it under the name GPT-4-Turbo. The advantages are that it is cheaper, has a much larger context, and the quality is better. There’s no reason to use the old model now.
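If you are not sure whether your API account can see the new model yet, a quick check like the following lists the GPT-4 variants available to you. This is just a sketch, assuming the openai Python package (v1.x) and an OPENAI_API_KEY in the environment.

```python
# List the GPT-4 models visible to your API account.
# Assumes openai v1.x and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()
gpt4_models = [m.id for m in client.models.list() if m.id.startswith("gpt-4")]
print(gpt4_models)  # look for "gpt-4-1106-preview" in the output
```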

 

What’s the Playground?

The Playground is inside your developer account (platform.openai.com), and you can use the latest models there as well.

 

What’s going on with Altman going to Microsoft?

We kind of have to wonder if Microsoft is just going to swallow OpenAI. Something like 700 employees are threatening to go to Microsoft if the board of directors doesn’t quit. It’s hard to believe that Microsoft didn’t know in advance that the board was going to fire Altman. And then Microsoft stock went higher than ever. I think we really can’t know what happened.

 

Why not just use Bing for GPT-4 if it’s free?

The OpenAI models are available through at least two channels: the API (confidential) and other ways, such as free services (not confidential). Even the data protection in ChatGPT+ isn’t that great. Bing cuts you off after five questions anyway, and they keep data for something like 6 or 18 months. The privacy policy under Enterprise is the same as under the API, but I don’t know what all is involved in that plan. Free AI environments aren’t the right place for a professional workflow with data privacy, and even that isn’t enough for some clients.

How much do tokens cost?

It’s approximately 1,000 tokens for 750 English words. Pricing is at https://openai.com/pricing. It takes more tokens to represent words in some languages than in others. Tracking tokens in CotranslatorAI is one way to keep an eye on this; the tool has something built in that converts accurately. Incoming and outgoing tokens are priced differently. GPT-4 is about 10X more expensive than GPT-3.5. At the levels I’m using AI (all GPT-4), I finally hit $50 in a month, but it’s usually $20-30. You have to do the math to decide if it’s worth it.
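As a rough illustration of the math, here is a small sketch that counts tokens with the tiktoken package and estimates the cost of a request. The per-1,000-token prices and the output ratio are placeholders to be checked against https://openai.com/pricing, not figures from CotranslatorAI.

```python
# Rough cost estimate for one request. The prices below are placeholders;
# check https://openai.com/pricing for current per-1K-token rates.
import tiktoken

INPUT_PRICE_PER_1K = 0.01   # assumed input price, USD per 1,000 tokens
OUTPUT_PRICE_PER_1K = 0.03  # assumed output price, USD per 1,000 tokens

def estimate_cost(source_text: str, expected_output_ratio: float = 1.2) -> float:
    """Count input tokens and guess output tokens to estimate the cost of a request."""
    enc = tiktoken.encoding_for_model("gpt-4")
    input_tokens = len(enc.encode(source_text))
    # Target-language output is often longer than the source, hence the ratio.
    output_tokens = int(input_tokens * expected_output_ratio)
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

print(f"${estimate_cost('Guten Tag, wie geht es Ihnen heute?'):.4f}")
```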

 

I pay $20/month for ChatGPT; why do I need tokens?

With the API, you do have to watch tokens, but you get confidentiality and can build a more professional workflow. If you use ChatGPT+, you can use it all you want, but there is some throttling, confidentiality is weaker, the output is wordier, and (at least until recently) the context was leaky.

 

Will the output be the same whether you access the OpenAI models through ChatGPT+ or the API?

No, they are not the same. I’m not sure exactly how they differ, but though they share the same foundation model, they are somehow trained differently. For example, ChatGPT+ is more talkative. Perhaps some testing should be done to compare them, but I haven’t done that yet.

 

Is anyone considering, or already using, different pricing structures for clients based on increased output? Is anyone being transparent about using AI and reviewing, i.e., being the “human expert”?

I’m being transparent about what I’m doing. I think that per-word rates will have to come down; whether hourly rates have to come down is a different matter, though. Now that we are starting with better translations from the AI, we can’t charge what we used to charge when we were starting from scratch. Some translators think we should protect our pricing, just like a doctor or an accountant might.

Michael suggested pricing for various levels of quality: bad, medium, and high. So we justify our pricing based on that and the technology we use. I say that the market will have to support various levels of quality. I also mentioned that when I’ve tried to offer various pricing for different quality levels, it doesn’t work. 

Candice says we should charge whatever the market bears. But with agencies, pricing is getting lower and lower, so translators will have to go to direct clients, and she’s not lowering her pricing for them, at least until her clients start to demand it. She thinks Wordscope is good, but $40/month is too much because she doesn’t have enough work. Philippe suggested that she look at the value it adds rather than the absolute cost.

Silvia pointed out that many clients are providing the tools for translators to use. Considering that we often still have to have at least one CAT tool, such as memoQ or Trados, a tool like Wordscope will be particularly challenged, in that it is really asking translators to buy yet one more tool for just the fraction of jobs that might benefit from it.

Translators who claim that the AI only helps 20% may not be being realistic. I find that 50% is plausible in many cases. Vladimir says that AI doesn’t work so well between English and Slovak.

Don’t forget that there are also transaction costs in the process that have to be considered. I’m finding that overhead on small jobs is increasing, so reducing minimum charges is out the window.

Silvia mentioned the mental fatigue from working on prompt engineering. I pointed out the risk factor for trying to do things with the AI that may or may not work.

I’m not sure you can argue for the same hourly rate if you’re not actually adding more value in that hour because of the AI. The translator must find ways to add more value than the AI; just tagging along as the AI does the work isn’t going to help sustain rates.

Michael wants to show clients how much better he does than the MT. But if we can’t show the client that, then we can’t justify our work, and I think those improvements are harder to find than they used to be. Five years ago, the MT got it to 20%; now the AI gets it to 70%. So the value we can add is less than it used to be.

Candice is selling herself as an indispensable right-hand person for language issues, and says she can add cultural consulting as well. She suggests we move to an hourly rate and focus on relationships. And remember all the things we bring: language skills plus tools and technology. Don’t tell clients all about the AI you’re using.

So we are finding ourselves needing to bring in a lot of technologies and business skills, in addition to our language skills.

 

How will excellence be measured in technical translation?

Cultural issues are not so relevant with a user manual or other documents where they don’t matter. But GPT isn’t so blind to nuances of culture, either. For example, in Korean there are many ways to express formality and respect, but the AI just gets it. Adding a button to switch between formal and informal might work for some languages, but not for others; every language has aspects that can’t be solved with buttons like that. I find that the AI does just fine with context and some training, and then nobody is going to read the document and get offended. I don’t even have a single prompt in my prompt library to address this aspect.

When I started in translation, every job started at 0% and it was my job to get it to 100%. But over time, that starting point is going up, and the AI is even helping me get the final quality closer to 100%. So our quality window is getting smaller; how do we even measure whether a job is good? If we can’t show clients that we do a better job, we aren’t going to be able to get premium prices. Clients aren’t going to pay for what they can’t measure.

So, when clients are satisfied with 80% and the AI can do that, then they might not even need us. So we need to find those markets that really need the 100%.

Is the market going to increase thanks to lower rates? Or shrink as clients take translation in-house?

So the question is whether the market will shrink, or whether the AI opens up new possibilities for content that wouldn’t be translated otherwise. Vladimir wants to know how everyone can chase the premium market.

The concern is that inexperienced translators plus the AI can replace us with better quality. Uta is seeing how accessible AI is to people, and she wants to know how we can justify ourselves. Are we now “language experts” and not “translators” anymore? She has seen clients with low budgets leave for MT/AI, or they give her MT to edit up. Clients don’t realize the effort it takes to get to the final product: even if the AI only saves 20%, clients think it saves 50%. So translators are now able to deliver less value, because the AI on its own gets the translation to a really good starting point.

Vladimir noted that the hype about AI is causing clients to have unreasonable expectations about quality, and this is influencing their willingness to pay.

Silvia noted that she lost money with the AI on a job because of all the effort to engineer prompts for the AI. 

Michael suggests that we change our professional name to “language expert” and get the translations from 90% to 100%. So we have to focus on that top level, and then we can equate our profession with engineers, etc., using hourly rates.

Silvia doesn’t call herself a translator; she says she’s a reviewer. She works with AI in every workflow, so we have more content to check, much faster.

Una points out that her markets are shrinking because 80% is good enough for many budgets. Also, she is asked for proofreading instead of translation, so there’s less work if the volume doesn’t increase. She is trying to find new clients.

Vladimir mentioned that he’s experienced lower volumes this year. A lot of translators seem to be saying this, as am I.

Daniel points out that knowledge and content is doubling every so many months, so maybe that will similarly result in more content for translation. He wants to know how much MT is growing too, in line with the increases in content: which is growing faster? So translators will do the more qualified work, and the more boring stuff will go to the AI. I mentioned that we also have to remember that the number of people actually reading it is growing much more slowly, and we are already facing content overload.

 

What does AI mean for MTPE?

My opinion is that MTPE is not relevant anymore. I don’t think many clients can get me a better MT translation than I can get all by myself using the AI. So when a client sends me something with MT and wants a better price, that isn’t compatible with the situation; it’s better to lower prices from the get-go based on using AI. I think the answer is “edit distance”. I’ve never had a client take me up on my model, which compensates me based on what I change/improve. I’ve seen that Amazon has something like this.
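To make the “edit distance” idea concrete, here is a minimal sketch of one way such a compensation model could be computed. The per-word rate and the example texts are hypothetical, and this is my illustration, not necessarily how my own model, Amazon’s, or anyone else’s actually works.

```python
# Sketch of an "edit distance" compensation model: pay is scaled by how much
# the linguist actually changed between the MT/AI draft and the final delivery.
# The rate figures are hypothetical, not anyone's actual pricing.

def word_edit_distance(draft: str, final: str) -> int:
    """Word-level Levenshtein distance between the draft and the final text."""
    a, b = draft.split(), final.split()
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        curr = [i]
        for j, wb in enumerate(b, 1):
            cost = 0 if wa == wb else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1]

def compensation(draft: str, final: str, base_rate_per_word: float = 0.12) -> float:
    """Scale a hypothetical full per-word rate by the fraction of words that changed."""
    words = max(len(final.split()), 1)
    change_ratio = min(word_edit_distance(draft, final) / words, 1.0)
    return round(words * base_rate_per_word * change_ratio, 2)

print(compensation("The quick brown fox jump over the dog",
                   "The quick brown fox jumps over the lazy dog"))
```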

People who can’t adapt to the reviewer workflow won’t survive. In fact, she’s found that translators can get offended too easily when their work is reviewed, which isn’t a problem when you’re checking AI output. She also mentioned that in the past, the machines were learning from us; now we’re learning from the machines. Clearly we’re going to have to adapt.

Una sees the AI as an extension of using MT.

Michael points out that even when the client provides an MT, he still has to work through the text deeply to produce the final product. He mentions the word “spring”, which can be a season or a part of a car, and says the MT can’t get that right. He points out that the introduction of spelling and grammar checkers didn’t result in clients demanding rate reductions. I suggested that the “spring” example isn’t relevant anymore with context and generative AI.

Vladimir thinks the rates for MTPE are too low because clients assume the work is 50-55% less, as if it were as easy as reviewing a human translation. He says that with AI it’s harder to evaluate whether the original translation is any good than it is with a human translation. Vladimir thinks rates will stabilize at the point where clients’ expectations of savings actually match what those savings are.

 

My idea of a “bounty” approach to translation revision

So companies might get their work done with whatever translation process they choose. Then, when they say they want it perfect, they put their translations out there for linguists to check, on the basis of a bounty for finding errors. Really perceptive and efficient translators would make a fortune, while others would have their eyes glaze over and leave. Wouldn’t this be a business model we could take advantage of?

 

What do I think about memoQ AGT?

memoQ has just integrated AI into the workflow in rudimentary form.

I have expected that translation roles will be taken over by the CAT tools over time, so I’m not surprised that memoQ has added this functionality. Translation might even be the low-hanging fruit among the processes we work on. But I’ve identified about a dozen use cases for translators that use AI, and translation is only one of them. Other tools will take those over, while CotranslatorAI will fill in the gaps that other tools haven’t gotten to yet, or can’t get to.

Also, memoQ AGT still doesn’t use GPT-4; it uses 3.5. It doesn’t use context either, it covers only 20 languages, it’s in beta, and individual translators can’t use it. I’ve been using GPT-4 and context for 9 months.

And yet, just a few days after memoQ announced AGT, which is supposedly state of the art but doesn’t even use the GPT-4 model or context, OpenAI hit us with yet more. We haven’t even assimilated technology that’s 9 months old, and now there’s more! That’s what was shocking to me.

 

Client expectations of correctness

Vladimir thinks that most clients won’t be intolerant enough of errors to justify the extra cost. But I asked why a client wouldn’t be willing to add their document to this “bounty hunter” ecosystem if they only had to pay for mistakes found. There would be no risk to them. I think they’d be willing to pay a lot of money for a guarantee that the very last mistakes would be found.

Vladimir sees from the quality of the MT and old translations that clients provide that their standards are low. So even if we are really great, if the clients aren’t willing to pay for it, then we have a problem. I pointed out that clients won’t pay for value they can’t measure; we have to figure out how to prove our value. It’s not enough to put forward our credentials, and the AI is making it harder than ever to keep that up.

If translators think that every word has to be exactly the way they like it, then it will take as long as a translation from scratch. But does the client really care? What’s the point of making the effort if nobody cares? If the client’s perception is the same, then it’s wasted. Better to save 30% of the effort, charge less, and get more work.

Vladimir asks what happens if his translation goes to a reviewer who complains about the style. He’s afraid the client won’t want to hear that the job is just “good”. But I mentioned that if he’s working at that premium level, then great. However, that’s not how many of us work.

I suggested my recent strategy of matching the client’s requested rate and then matching my effort to that. If the client wants half my rate, then I give it to them that way and put in half the effort. If the client doesn’t like it, they can find someone else, because my price was so good. The reality of the market is that the client doesn’t check it properly anyway. They just do a spell check, point out a couple of odd wordings, and move on. So give them their discount (but don’t tell them you’re giving them a discount) and then do your best within the budget confines. Start with clients you’ve never worked with before, so that you don’t care if you lose them. It’s not like they’re getting the best work anyway at the prices they’re offering. It’s been working for me.

One viewer pointed out that reviewers seize on minor mistakes and turn them into big problems. But the reality is that review is a crap shoot most of the time anyway. It can happen whether you do your best or not: the reviewer doesn’t like the way you write, and it comes back with negative comments. You have to decide whether that’s the result of inferior work or just random factors.

I think that hourly pricing is the worst, because hourly rates aren’t really hourly; they are fixed-price jobs masquerading as hourly jobs, because the client tells you what your productivity has to be. You have to bill on a productivity-based number so you can keep the gains of your productivity improvements.

Vladimir is not willing to work for lower quality. He wants to be able to deliver good work. I can respect the perspective of being a purist. I’m not a purist; I’m going to give the client what the client wants to pay for.

With the AI, we can now work faster and deliver better work than we ever could before for the same effort. So we can lower our prices and still deliver good enough work; it’s not about delivering garbage. If it meets their specifications and I deliver good work, doing my best and being productive, I think there is pride to be taken in that.

In today’s world, flexibility is key. And do clients even really know how good a job it is, and do they really care? And if it serves its purpose, then it’s a good job.

 

AI workflow with context

Silvia points out that now with context, it will become possible to translate entire documents at once. I think that CAT tools will have to adapt in big ways; I haven’t translated in the CAT tool for months with my workflow. I export out to an RTF file and match it up with an anchor prompt. Sometimes I translate 1-2 pages in one go. It’s still not perfect, but if it makes a mistake, it makes the same mistake everywhere, because it is internally consistent. I then import it back into memoQ, do QA and deliver. It’s a huge productivity gain.

And to think that the big tools have still not done anything in response to this; they have not done one thing to acknowledge the power of generative AI in this respect. Is generative AI better than DeepL? Well, DeepL doesn’t have this context. And so they still haven’t properly adapted to the power of the model from a year ago, and now the models just improved overnight. The technology is outpacing the ability of the industry to keep up.

Unusual language pairs

Vladimir says that the style of the writing is really bad in his language pair, even though the terminology is good. He’s found that ChatGPT is not good enough at checking terminology from another MT engine. I said that the AI is not great at checking things.

Jean-Pierre says Google Translate goes through English between Dutch and Romanian. But he says DeepL and ChatGPT are good between Dutch and Romanian. 

When I first started with ChatGPT in December 2022, I was impressed with the translation. So I asked who ChatGPT used for machine translation and it said it used Google Translate. So later I asked about pivot languages, and it made stuff up about Spanish and French, etc. For weeks I thought that was true; I didn’t realize it was all BS.

I’m somewhat skeptical about just how weak MT is between unusual language pairs. MT people are always comparing BLEU (or other scores) and trying to one-up each other incrementally. But as translators, we don’t need to do this. We don’t care about a point here or a point there. If the MT can get the text to us at a good-enough level, we’ll just jump in and finish the job to where it needs to be, and a couple of points better here or there don’t matter. Sometimes I wonder if generative AI isn’t more powerful for us at the small scale than for big enterprises that need to put out huge amounts of data.

Ines Bojlesen’s testimonial (2:30-2:35)

She translates into Brazilian Portuguese using CotranslatorAI and Trados. She prefers CotranslatorAI, and it works really well. She uses it as an editor. She doesn’t want her clients to know she’s using CotranslatorAI. In the beginning it was slower, but now she’s getting really great results; it’s much faster.

She has translated for 55 years, starting with handwritten work, then a typewriter, then an electric typewriter, then faxing, then computers and printers at home, so it’s been a constant learning process.

At 74 years of age, she still tries to keep up and learn more. She remembers when MT came around and she thought it was the end of the world. But now MT is good, and so is AI, and she says we should just keep at it, and keep improving our profession. She says we should stay positive.

She thinks things are really big now, with news spreading faster and the ability to communicate over long distances. She sees both good and bad things, and it’s just as scary and fascinating as computers and the original MT were when they first came up. So she sees AI as a wonderful tool, and there’s so much to learn about it. As translators and linguists, they still need us.

Jean-Pierre pointed out that information management has sped up. From language to writing: 50,000 years. From writing to printing: 5000 years; from printing to computers: 500 years; from computers to internet: 50 years; from Internet to email: 5 years, etc…

 

We need to adapt to new roles and fields

So we say to ourselves that “I don’t like revising” or “I don’t like this task or that task”. But it’s not really up to us anymore. We’re in a business where the industry provides us with the technology and the framework that we have to work in. We can’t dictate how we will work. It’s our job to find the opportunities within the current system. And the more we focus in on topics and fields and workflows that start out boring, the more interesting they become as we get deeper into them. Maybe we need to revisit tasks that didn’t seem interesting before and figure out how to adapt to them.

Vladimir asked about evaluating and training MT engines. He said that the pay was really terrible: $8/hour. I said that a lot of these new fields aren’t paying much. Apparently companies are paying workers in Africa very low wages to train the AI in certain ways. So training doesn’t seem to pay much, though the people designing the MT models are paid more. This is probably not where we’re going to find our future work as good translators.

Vladimir pointed out that his clients were thinking that revision tasks were really easy. He said he takes longer but does a better job, but that they don’t recognize it. So if clients are not able to recognize the value, then you won’t get paid for it and will have to work for the rate of the masses.

Silvia said that she’s been getting paid almost nothing for training Bard.

 

Agency perspective

One person from an agency left a comment about how we bash agencies and how she wishes we didn’t see it that way. But I mentioned that though agencies can be an easy target, they are facing the same challenges we are, with really competitive markets.

 

What would a viable resistance stance to AI even look like in today’s world?

In years past, we could say the MT didn’t speed us up, it didn’t help, etc. But today, those arguments don’t hold water. What could you do if you didn’t want to cooperate but also didn’t want to leave the profession?

Vladimir suggests the premium market. But everybody can’t go there.

 

What could we do to transition into something else in 5-10 years if we need it?

Michael is active in all the new technologies, and he’s realizing that he’s getting tired of all the changes, even though he’s at the forefront. What about all the others who aren’t keeping up with this? He notes that fewer and fewer students are entering translation studies at university, and many may leave our profession. And relying on volunteers may not work. So this could be a way out for the real professionals who remain, as we add that expert level on top.

If the AI gets so good as to replace translators, then everyone will be out of a job. I mentioned that translation may be a use case that the AI can actually replace, more so than others. And likely there will be a big splitting of the market between low and high quality.

 

How would client restrictions on use of AI for confidentiality affect our work?

I refuse to sign agreements that prohibit the use of AI, and going forward, I expect that clients who demand that will lose competitiveness if they enforce it. Or they just get translators to sign who then ignore the agreement completely, and it becomes a farce.

They might say things like “You can only use our tool”. CotranslatorAI actually lets you bypass that. 

Vladimir pointed out that if people don’t know what’s going on, it doesn’t hurt them. I mentioned that I think a lot of clients ask for these agreements without actually caring whether anyone follows them. They just have those agreements to meet their “policy”.

In my experience, clients talk about how important they think this is, but they aren’t willing to pay for it. I remember offering a secure process a few years ago where I would do the work completely securely, without the Internet. They say it’s important, until they actually have to pay a penny for it. They just want a signed contract to show their client, to maintain the fiction that they follow their processes.

Jean-Pierre is trying to follow a process to anonymize content before ever using it in the AI. This is a way to offer greater security to clients. He was doing it by hand, and is now using TransTools+. He dreams of an AI on his own local computer that could process it locally and more easily. He uses this to document his process for auditing purposes. He finds that if clients think it’s important, then they stick to it. Some don’t even trust the assurances of OpenAI to handle the data securely.
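For illustration, a very rough sketch of the masking idea follows; it is not Jean-Pierre’s actual TransTools+ workflow, and the patterns (emails, phone numbers) are just examples of what one might anonymize before sending text to the AI.

```python
# Sketch: mask sensitive strings with placeholders before sending text to the AI,
# then restore them in the translated output. Not an actual TransTools+ workflow.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def anonymize(text: str):
    """Replace matches with numbered placeholders and keep a map for later restoration."""
    mapping = {}
    counter = 0
    def _sub(label):
        def repl(match):
            nonlocal counter
            counter += 1
            key = f"[{label}_{counter}]"
            mapping[key] = match.group(0)
            return key
        return repl
    for label, pattern in PATTERNS.items():
        text = pattern.sub(_sub(label), text)
    return text, mapping

def restore(text: str, mapping: dict) -> str:
    """Put the original strings back into the (translated) text."""
    for key, original in mapping.items():
        text = text.replace(key, original)
    return text

masked, mapping = anonymize("Contact Jane at jane.doe@example.com or +1 555 123 4567.")
print(masked)   # Contact Jane at [EMAIL_1] or [PHONE_2].
# ... send `masked` to the AI, then restore the placeholders in the result:
# print(restore(translated_masked_text, mapping))
```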

 

Would you recommend young people to join translation?

Michael thinks that the need for interpreters will not go away, especially because of confidentiality issues. So humans will be necessary for that. He thinks that confidentiality issues could become more important over time as information becomes more important. 

I think it would be hard to suggest to a young person that they invest the next 40 years in just translation. But starting with translation could be a good basis for growing into something else later.