No prep, no magic.
I recently ran a 5k on a whim, which is basically how I learn lessons the hard way. I decided fifteen minutes before the start, from my couch, and showed up at 9:04 for a 9 AM start. It was not my first 5k, but shortly after the start my body began shutting down, fully committed to stopping as soon as possible. It held out until the 2.5 km mark; I pushed on and finished among the last runners.
My body had voted “no” once it found out I was trying to beat the system. No warm-up. No mental runway. Not even the thought, “I’ve got a race coming up.”
Machine learning systems follow the same logic. Data scientists spend close to 80 percent of their time cleaning, preparing, and understanding data before any model training happens. Bodies and models have the same attitude toward spontaneity: they don’t care what you meant to do, only what you prepared for. No prep, no magic.
The counterargument is that people sometimes do things spontaneously and discover new capacity. Some runners show up late and still run well. Some models do fine with minimal preprocessing. But that is not a philosophy; it is variance. In general, preparation primes whatever potential you have already built and makes it ready to be served. Spontaneity can reveal talent, but preparation makes talent deployable.
Warm-ups affect performance in two main ways. The first is neural. A warm-up signals to the brain that a serious event is about to occur and shifts the nervous system into a state where it is ready to be stretched by the activity. The brain allocates more cognitive fuel to the task. When you skip this transition, you begin the event with the nervous system moving and asking questions at the same time. You are running while still negotiating with the system, and that negotiation is expensive. Early discomfort registers as a stronger warning than it should.
The second is physical. Warmed tissue moves with less resistance, joints glide better, blood flow increases, and the body enters a ready state for the main event. Without a warm-up, you spend more energy on the same movement, and that hidden cost shows up later as early fatigue, poorer rhythm, and a weaker ability to enter flow. The run feels harder than the distance deserves because you are paying for friction you could have reduced before you started.
However, if preparation starts stealing the joy of the actual event, you have prepared too much.
Over-preparation sits at the opposite end of the spectrum. Under-preparation makes you pay interest during the event; over-preparation makes you perform the event before the event. In running, too much warming up can leave you slightly drained or mentally spent, as if the nervous system already did the hard part and now resents the main task. In model training, over-engineered preprocessing can strip away the very nuances that would have helped the model learn. You can smooth the data so much that you delete signal along with noise. You can impose your assumptions so strongly that the model has nothing left to discover.
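To make the smoothing point concrete, here is a small sketch in plain Python. The data, signal period, and window sizes are invented for illustration: a noisy cycle survives a light moving average, but a window wider than the cycle itself flattens the cycle along with the noise.

```python
import math
import random

random.seed(0)

# Hypothetical example: a periodic signal buried in noise.
n = 200
period = 20
signal = [math.sin(2 * math.pi * i / period) for i in range(n)]
data = [s + random.gauss(0, 0.3) for s in signal]

def moving_average(xs, window):
    """Centered moving average; trims the ends where the window doesn't fit."""
    half = window // 2
    return [sum(xs[i - half:i + half + 1]) / window
            for i in range(half, len(xs) - half)]

def amplitude(xs):
    """Rough amplitude estimate: half the peak-to-peak range."""
    return (max(xs) - min(xs)) / 2

light = moving_average(data, 5)    # window << period: noise shrinks, cycle stays
heavy = moving_average(data, 41)   # window > 2x period: the cycle averages away

print(amplitude(light))  # stays near the true amplitude of 1.0
print(amplitude(heavy))  # far smaller: the smoothing deleted the signal too
```

The light window averages out noise faster than it attenuates the cycle; the heavy window spans two full periods, so the sine sums to nearly zero and only a faint residual survives.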
Preparation does not have to be elaborate. The goal is not to do more work before the work. The goal is to change the system state so the main work costs less and expresses more. In practice, you want the mind to stop treating the session as a surprise, and you want the body to stop treating the first reps as a negotiation. The smallest mental version of this is simply knowing in advance that training is happening today, letting the idea sit long enough for your nervous system to orient toward it. The stronger version is arriving early enough to rehearse the first few minutes of effort and remove the shock of transition. When the mind arrives late, the body compensates by withholding.
Physically, the point is the same. The early minutes are where friction lives and movement is still searching for coordination. A good warm-up tries to remove friction, raise temperature, and wake up relevant patterns, letting you touch the event at a lower cost before the event begins. It is the same logic as data preparation: you clean the data so the model spends its capacity learning signal instead of fighting noise.
If you want to use what you have built, you have to prime it. If you want to perform well, you have to arrive early enough for your system to believe you.