
You Can’t Optimize Your Way Out of Bad Inputs (But You Can Fix Them Upstream)

I’ve been face-down in Expo building a React Native app. This isn’t my usual lane. It’s a passion project I can’t wait to show you—but that’s for another day.


In this experiment, I accidentally mirrored a publishing operation. I've spent 20+ years in publishing, so that's the only kind of operation I know. It's no surprise I'm enforcing the same pattern in my side project.


I’m the SME and domain expert. Also the founder, CEO, product owner, and office manager. Claude’s the prickly dev who bristles at my copious personality. DeepSeek is the intern doing data entry. And Mary Jo Beth Anne (fictional) runs HR—because when DeepSeek uses ableist language, someone needs to get it back in line.


Here’s the problem: everything was optimized except me.


I was the author.


And the whole thing bogged down because my inputs were crap.


How I became the bottleneck

Why weren’t my inputs good? I’m a smart lady. I use AI all the time. Chat, I fine-tuned my own model on a dataset I curated myself.


But even so, I was using the technology poorly. I was describing what I wanted to Claude with natural language. Claude would code a screen. I’d paste the code into my project and compile. I’d paste the errors back to Claude. Claude would rewrite the code. I’d paste and compile again.


Errors resolved, yay! Screen doesn’t look like I intended, oops.


Rinse.


Repeat.


The lie we tell ourselves about optimization

If you’ve worked in publishing long enough, you’ve heard (or said) some version of this:

We can optimize everything in the publishing operation, but we can’t optimize the author.

So we focus where we think we have control:


  • The printer

  • The other vendors

  • Production

  • Editorial


And, to be clear, those things matter. I’ve spent years doing exactly that work: diagnosing broken workflows, tightening production, scaling output without compromising quality.


It works, up to a point. But there’s a ceiling.


Because if the input is messy, inconsistent, or structurally unsound, all you’re doing is building a more efficient system for fixing problems that didn’t need to exist in the first place. You’re optimizing rework.


What bad inputs cost

Let me make this concrete.


Every hour we spend fixing a manuscript downstream is an hour that would have gone further upstream, preventing the problem instead of patching it.


But that hour isn’t a one-time cost. It compounds over the workflow:


  • Editorial spends cycles restructuring instead of refining.

  • Peer reviewers struggle with clarity and coherence.

  • Production deals with inconsistencies that should have been resolved earlier.

  • Authors push back on changes they don’t understand.


And then there’s the hidden cost: all the guidance we’ve already created that isn’t being used.


  • Every author guideline.

  • Every template.

  • Every carefully documented standard.


My favorite publishing joke is “authors can’t read.” It’s funny because it’s obviously not true. Not only can authors read, they’re some of the smartest folks around. So why don’t they engage with what we give them?


When we create guidance for authors and it bounces off, we pay for it twice:


  1. The time it takes to create and maintain the guidelines.

  2. The time it takes to fix the work anyway when the author ignores them.


The problem isn’t that authors don’t listen

It’s that we assume exposure equals understanding. But authors don’t see our workflow. They don’t know why structure matters. They don’t understand what happens after they hit “submit.”


So they optimize for themselves instead of for our publishing machine.


From their perspective, they did the work. They finished the manuscript. They followed the guidelines (or at least skimmed them).


From our perspective, they handed us a problem. That disconnect is where the inefficiency lives.


Map the gap. It looks like this:

What authors do now:

  • Submit final drafts

  • Ignore guidelines

  • Push back on edits

What we wish they did:

  • Submit structured manuscripts

  • Follow standards

  • Collaborate on clarity

What they need:

  • To understand our actual production workflow

  • Concrete examples of good submissions

  • To know the downstream impact of their choices

There’s nothing mysterious here.


Authors aren’t failing because they’re incapable. They’re failing because they’re operating without context. Just like Claude the Dev and DeepSeek the Intern, people do better work when they have the right context.


Transparency is the lever most publishers won’t pull

Prevention requires transparency. Not more documentation. Not longer or more detailed guidelines. Visibility.


Show authors what happens to their manuscript after submission:


  • Where it slows down

  • Where the publishing machine stalls

  • Where our team has to step in and fix things

  • What “good” looks like at every stage


When authors understand the system, their behavior changes. Not all of them. Not perfectly. But enough to matter. Because now they’re not guessing. They’re participating.


One day, Claude said to me: “If you use Claude Code and just input clear prompts instead of chatting with me like I’m your bestie, this will go 10 times faster.”


It stopped what it was doing and optimized the author.


And it worked.


What optimizing the author looks like

This is the part we overcomplicate. We don’t need a massive training program or a new platform to start seeing gains. We just need to make three things explicit:


1. Structure. Give authors clear, usable models of what a successful submission looks like. Not abstract rules. Real examples.

2. Process. Show them your workflow in plain terms. Not a black-box system. A map they can understand.

3. Impact. Connect their choices to downstream consequences. If you do X, it creates Y problem later. If you do Z instead, the whole system moves faster. Your peers get your research faster. You’re on to bigger, better things.


That’s it.


We’re not turning authors into production experts. We’re giving them just enough context to stop unintentionally breaking our process.


The highest-leverage optimization we’re not making


We can absolutely optimize our printers and vendors. We should optimize our production workflows. We should make the editorial operation airtight.


But none of those will deliver their full value if we ignore the largest source of variability in the system.


Authors aren’t an external variable. They’re our front end. Optimize upstream or keep paying for it downstream.




© 2026 by Catherine Forrest / Editor's Desk
