Most of the systems deciding what you see, who finds you, and sometimes whether you get a shot…are black boxes.
Your feed. Your “For You.” Your recommended jobs. Your email spam filter. The scoring tools that flag résumés as “good fit” or “ignore.” Even the AI models that help you write or plan. They all sit between you and reality, quietly making decisions you don’t see.
This post isn’t “algorithms are evil.” It’s the opposite: algorithms are powerful, and that’s exactly why you can’t afford to let them replace your judgment—especially if you’re a student or early‑career candidate.
We already live inside other people’s filters
Think about a normal day:
- You open Instagram or TikTok: a ranking system decides which faces, bodies, and lifestyles you see first.
- You open your email: a spam filter hides certain messages before you ever read them.
- You search for internships or programs: recommendation engines surface some options and bury others.
- You apply for a role: an ATS (Applicant Tracking System) might filter you out before a human blinks.
All of that happens before your own thinking kicks in.
The danger isn’t that these systems exist. The danger is when we forget they exist—and start treating their outputs as the full story.
A simple example: “No one is hiring” vs “No one is hiring me here”
I’ve seen students (including myself) fall into this trap:
“I applied everywhere. No one is hiring.”
Then you look closer:
- All applications are to the same kind of company.
- All came through the same job board.
- Profiles and keywords haven’t been updated since high school.
- Geographic filters are accidentally on.
- The candidate never reached out to a human.
It’s not that “the world doesn’t hire.”
It’s that one slice of a system, optimized for something you don’t fully understand, filtered you out.
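To make that "one slice of a system" concrete, here is a hypothetical sketch of the kind of naive keyword screen that rule-based résumé filters often resemble. The keywords and threshold are invented for illustration; real applicant tracking systems vary widely and are usually more elaborate.

```python
# Hypothetical sketch of a naive, rule-based résumé screener.
# Keywords and threshold are invented; real ATS rules differ.
REQUIRED_KEYWORDS = {"python", "sql", "internship"}

def passes_screen(resume_text: str, min_hits: int = 2) -> bool:
    """Count how many required keywords appear; reject below the threshold."""
    text = resume_text.lower()
    hits = sum(1 for kw in REQUIRED_KEYWORDS if kw in text)
    return hits >= min_hits

# A candidate who happens to use the expected wording passes:
print(passes_screen("Built data pipelines in Python and SQL during my internship"))  # True
# An equally capable candidate with different wording is filtered out:
print(passes_screen("Analyzed datasets with pandas; co-op at a data startup"))       # False
```

Nothing in that function ever evaluates the person; it evaluates string matches. That is the "black box that never actually saw you."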
If you treat the output of that slice as a universal truth, you stop trying. You start building a story about yourself (I’m not good enough / I’m not the type / there’s no path for people like me) based on the judgment of a black box that never actually saw you.
The fake comfort of “the algorithm decides”
There’s a weird comfort in blaming algorithms for everything:
- “The TikTok algorithm hates me.”
- “LinkedIn’s algo doesn’t show my posts.”
- “The ATS is biased, so what’s the point?”
- “AI will replace us anyway.”
Some of that may be partially true. A lot of it is giving away your agency.
When you say “the algorithm decided,” you’re also saying:
- I don’t need to think about my strategy.
- I don’t need to analyze what’s actually happening.
- I can’t be expected to experiment, adapt, or ask different questions.
It feels safe. It’s also dangerous. Because the people who learn to co‑exist with these systems—without worshiping them—will quietly move ahead.
How I try to use algorithms and AI without outsourcing my brain
I’m not anti‑algorithm. I’m not anti‑AI. I use them every day. The difference is I try to treat them as tools and mirrors, not oracles.
Here’s how I think about it:
1. Ask: “What is this system optimized for?”
- A social media feed is optimized for engagement, not truth.
- An ATS is optimized for speed and rule‑based filtering, not potential.
- A recommendation algorithm is optimized to keep you on the platform, not to expand your worldview.
When I remember that, I stop taking their outputs so personally.
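As a toy illustration of "optimized for engagement, not truth": a feed ranker might score posts purely on predicted interaction. The fields and weights below are invented for this sketch; real ranking systems are far more complex, but the point stands, since nothing in the objective measures accuracy or usefulness.

```python
# Toy feed ranker: scores posts only on predicted engagement signals.
# Fields and weights are invented; real systems are far more complex.
posts = [
    {"id": "thoughtful_essay", "pred_likes": 10, "pred_watch_s": 40},
    {"id": "outrage_clip",     "pred_likes": 90, "pred_watch_s": 25},
]

def engagement_score(post, w_likes=1.0, w_watch=0.5):
    # Note: nothing here rewards accuracy, usefulness, or truth.
    return w_likes * post["pred_likes"] + w_watch * post["pred_watch_s"]

ranked = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in ranked])  # → ['outrage_clip', 'thoughtful_essay']
```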
2. Use them to generate options, not decisions
I ask AI to:
- Brainstorm title ideas
- Suggest structures for documents
- Draft a first version of an email or description
But I decide:
- What feels accurate
- What is ethical and appropriate
- What I’m willing to stand behind with my name
AI can speed up the “what if?” stage. It cannot carry the “I’m responsible” stage.
3. Run human reality checks
If a feed, search result, or model output is telling me something important about my life, I try to:
- Ask a human: a professor, a friend, a recruiter, a mentor.
- Look for a second source: another platform, another dataset, another example.
- Check against my own lived experience: does this line up with what I actually see offline?
If all three disagree with what the algorithm suggests, that's a signal I take seriously, and I go with the humans.

4. Track patterns instead of obsessing over single outcomes
Instead of reading too much into one flopped post or one ignored application, I look for patterns:
- Which type of post consistently reaches the right people?
- Which kind of outreach consistently gets a response?
- Which prompts consistently bring useful AI outputs?
Patterns teach. Single outcomes mostly just trigger emotions.
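One lightweight way to act on "patterns over single outcomes" is to log attempts and aggregate response rates per channel. This is a hypothetical sketch with invented data; the point is that a rate per category tells you something a single ignored application never can.

```python
from collections import defaultdict

# Hypothetical outreach log: (channel, got_response). Data is invented.
attempts = [
    ("job_board", False), ("job_board", False), ("job_board", True),
    ("cold_email", True), ("cold_email", False),
    ("referral", True), ("referral", True),
]

def response_rates(log):
    """Aggregate response rate per channel instead of judging single outcomes."""
    totals, wins = defaultdict(int), defaultdict(int)
    for channel, responded in log:
        totals[channel] += 1
        wins[channel] += responded
    return {channel: wins[channel] / totals[channel] for channel in totals}

print(response_rates(attempts))
# Referrals respond most often in this toy log; job boards least.
```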
Why this matters more for people like us
If you’re a student, early‑career, from a less represented background, or building from outside major centers of power, algorithms will:
- Decide which role models you see
- Decide which scholarships, programs, or schools appear in your feed
- Decide which of your efforts look “credible” enough to surface
If you accept those decisions as neutral and complete, you risk living inside a shrunken version of your life, curated by people who've never met you.
That doesn’t mean you can hack every system. It means you can do three things:
- Stay aware that you’re always working inside some kind of filter.
- Learn enough about it to adapt your strategy.
- Refuse to confuse convenience with truth.
My rule for myself
Whenever I catch myself saying “the algorithm did X,” I try to follow it with:
“Okay, but what do I think, and what am I going to do about it?”
That’s where judgment lives. And judgment is the thing no model, feed, or hiring tool can fully replace.
If we give that up, we’re not being “realists.” We’re just making their job easier—and our lives smaller.