
5 Threat Hunts Any L1 Analyst Can Run Today

Discover five simple threat-hunting queries any L1 analyst can run today, with no complex tools or specialized skills required. Start detecting real threats fast.

Key Takeaways

  • Threat hunting doesn’t require years of experience — L1 analysts can start with basic log data and simple queries
  • Most attacks go undetected for months; proactive hunting cuts that window dramatically.
  • These 5 hunts map directly to real-world attacker tactics in the MITRE ATT&CK framework.
  • You don’t need custom scripts — logic works in any SIEM your team already uses.
  • Each hunt starts with a hypothesis, not a tool — the mindset shift matters more than the platform.

Introduction

A breach that sits unnoticed for months is not an edge case. According to IBM’s 2023 Cost of a Data Breach Report, it took organizations an average of 204 days just to identify a breach and another 73 days to contain it. The longer an attacker hides, the more damage they do.

Here’s what shifts that number: analysts who go looking instead of waiting.

Why L1 Analysts Can (and Should) Threat Hunt

Most people assume threat hunting is reserved for senior analysts with years of experience and expensive tooling. That’s not really true.

The core skill isn’t technical — it’s asking the right question. “What would an attacker do here, and what traces would they leave?” That question is available to every analyst on day one.

According to the 2024 SANS Threat Hunting Survey, 63% of organizations that adopted threat hunting saw measurable improvements in their security posture. The barrier isn’t skill level — it’s starting.

What You Need Before You Start

  • Access to your SIEM (whatever your organization uses)
  • Windows Event Logs enabled and feeding into that SIEM
  • A basic understanding of what “normal” looks like in your environment
  • A hypothesis (just one question to investigate)

That’s it. No certifications required.

The 5 Hunts

Hunt 1: Who Is Running PowerShell at 2 AM?

PowerShell is one of the most abused tools in a modern attacker’s playbook. It ships with every Windows machine, it’s trusted by the OS, and it can download files, move laterally, and pull data out of the network — all without dropping a single suspicious executable to disk.

Your hypothesis: Someone is using PowerShell outside business hours, or with encoded commands to avoid detection.

What to look for:

  • PowerShell processes running outside normal business hours (e.g., after midnight or on weekends) for accounts that typically work standard schedules
  • Commands that include -EncodedCommand, -enc, or -e (these hide the actual command in Base64)
  • PowerShell spawned by unusual parent processes like a word processor, spreadsheet app, or email client
  • Commands that include DownloadString, Invoke-Expression, or iex

Sample query logic (adapt to your SIEM’s query language):

  Hunt 1 query: suspicious PowerShell commands
  source = PowerShell Operational Log
  filter:  CommandLine contains "EncodedCommand"
        OR CommandLine contains "iex"
        OR CommandLine contains "DownloadString"
  group by: Hostname, Username, CommandLine
  -- Adapt syntax to your SIEM's query language

Key Windows event IDs:

  • 4104 — PowerShell script block logging; captures encoded and suspicious commands
  • 4103 — PowerShell module logging; records all executed commands

What a hit looks like: A word processor spawning PowerShell with a Base64-encoded string at 11 PM. That’s a well-documented attacker technique and something no automated alert would necessarily catch.
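The encoded-command check above can be prototyped offline before you translate it into your SIEM. Here is a minimal Python sketch; the function names and the marker list are illustrative assumptions, but the decoding step reflects how PowerShell actually handles -EncodedCommand (the payload is UTF-16LE text, Base64-encoded):

```python
import base64
import re

def decode_powershell_encoded(command_line):
    """Extract and decode a Base64 payload from a PowerShell
    -EncodedCommand / -enc / -e invocation. Returns the decoded
    script text, or None if no encoded payload is present."""
    match = re.search(r"-(?:EncodedCommand|enc|e)\s+([A-Za-z0-9+/=]+)",
                      command_line, re.IGNORECASE)
    if not match:
        return None
    try:
        # PowerShell encodes the script as UTF-16LE before Base64-encoding it
        return base64.b64decode(match.group(1)).decode("utf-16-le")
    except (ValueError, UnicodeDecodeError):
        return None

# Markers from the hunt above; lowercase for case-insensitive matching
SUSPICIOUS_MARKERS = ("downloadstring", "invoke-expression", "iex")

def is_suspicious(command_line):
    """Flag encoded invocations and common download-and-execute patterns,
    checking both the raw command line and any decoded payload."""
    decoded = decode_powershell_encoded(command_line)
    haystack = (command_line + " " + (decoded or "")).lower()
    return decoded is not None or any(m in haystack for m in SUSPICIOUS_MARKERS)
```

Decoding the payload matters because the Base64 string itself will never match a plain-text filter on DownloadString or iex.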

Hunt 2: Is Anyone Logging In From Two Places at Once?

Compromised credentials are the most common breach entry point — responsible for 16% of all breaches in IBM’s 2024 report. And they’re the hardest to catch because they look exactly like normal user behavior.

Your hypothesis: A user account was compromised and is being used from an unusual location or time.

What to look for:

  • The same username logging in from two different countries within minutes (impossible travel)
  • Successful logins after multiple failed attempts in quick succession
  • Logins at 3 AM from users who typically work 9-to-5
  • First-time logins from countries your organization has never seen

Sample query logic:

  Hunt 2 query: impossible travel / credential abuse
  source = Windows Security Logs
  filter: EventCode = 4624 OR EventCode = 4625
  group by: AccountName, SourceIP, TimeStamp
  flag: same AccountName, different country, within 10 minutes
  -- Enrich SourceIP with GeoIP for location comparison

Key Windows event IDs:

  • 4624 — Successful logon
  • 4625 — Failed logon attempt
  • 4648 — Logon using explicit credentials; someone authenticating as another account

What a hit looks like: A finance user in one city logs in successfully at 8:47 AM — then the same account shows a login from a foreign IP at 8:52 AM.
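The impossible-travel flag above reduces to distance over time. A rough Python sketch follows, assuming each logon has already been enriched with GeoIP latitude and longitude as the query logic notes; the Logon record shape and the 900 km/h threshold (roughly commercial flight speed) are illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class Logon:
    account: str
    timestamp: datetime
    lat: float   # from GeoIP enrichment of SourceIP (assumed available)
    lon: float

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(a: Logon, b: Logon, max_kmh: float = 900.0) -> bool:
    """Flag two logons for the same account whose implied travel speed
    exceeds roughly what a commercial flight could cover."""
    if a.account != b.account:
        return False
    distance = haversine_km(a.lat, a.lon, b.lat, b.lon)
    if distance < 1:   # same egress point or city; not travel at all
        return False
    hours = abs((b.timestamp - a.timestamp).total_seconds()) / 3600
    return hours == 0 or distance / hours > max_kmh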

Hunt 3: Is a User Touching Machines They Never Touch?

Once an attacker gets a foothold on one machine, they move laterally — hopping from system to system, looking for credentials, sensitive files, or a path to the domain controller. This movement is quiet. It uses legitimate Windows tools and leaves no obvious malware trail.

Your hypothesis: A standard user account is authenticating to systems outside their normal behavior — a possible sign of lateral movement.

What to look for:

  • A regular user account logging into servers (not just workstations)
  • Authentication to multiple machines in a short window (3 or more systems in under 10 minutes)
  • Network logon events (Logon Type 3) to hosts this account has never touched before
  • Process creation on remote machines originating from a workstation

Sample query logic:

  Hunt 3 query: unusual network logon activity
  source = Windows Security Logs
  filter: EventCode = 4624
     AND LogonType = 3  -- network logon
     AND AccountType = "standard user"
  flag: AccountName authenticates to 3+ unique hosts within 10 min
  flag: TargetHost is a server, not a workstation

Key Windows event IDs:

  • 4624 — Successful logon; filter for Logon Type 3 (network)
  • 4648 — Explicit credential use; someone authenticating as another account
  • 7045 — New service installed remotely; a common remote execution technique

What a hit looks like: A marketing analyst’s account is authenticating to a database server at 11 PM on a Saturday. That’s worth a conversation — at minimum.
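The "3+ unique hosts within 10 minutes" flag can be prototyped outside the SIEM. A minimal Python sketch over (account, host, timestamp) tuples; the tuple shape and thresholds are assumptions for illustration:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def flag_lateral_movement(logons, max_hosts=3, window=timedelta(minutes=10)):
    """Given (account, host, timestamp) network-logon tuples (Event ID 4624,
    Logon Type 3), return accounts that authenticated to `max_hosts` or more
    unique hosts within any `window`-long span."""
    by_account = defaultdict(list)
    for account, host, ts in logons:
        by_account[account].append((ts, host))
    flagged = set()
    for account, events in by_account.items():
        events.sort()  # chronological order
        for i, (start, _) in enumerate(events):
            # Unique hosts reached within `window` of this starting logon
            hosts = {h for t, h in events[i:] if t - start <= window}
            if len(hosts) >= max_hosts:
                flagged.add(account)
                break
    return flagged
```

In practice you would also filter the flagged results against a baseline of hosts each account normally touches, which is exactly the "what does normal look like" prerequisite from earlier.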

Hunt 4: Has Anything Set Itself Up to Run Every Time the Computer Starts?

Persistence is how attackers stay in an environment after a reboot. Once they’re in, they plant something that re-runs their code automatically — a scheduled task, a registry run key, a service. These survive reboots and often survive antivirus scans because they use trusted Windows mechanisms.

Your hypothesis: An attacker has planted a persistence mechanism to maintain access across reboots.

What to look for:

  • New scheduled tasks created outside business hours or by unexpected user accounts
  • New entries added to common Windows startup registry locations
  • New services installed on machines that don’t typically get new services
  • Tasks with names that sound legitimate (like WindowsUpdateTask or SystemCare) but weren’t there last week

Sample query logic:

  Hunt 4 query: new scheduled task created outside business hours
  source = Windows Security Logs
  filter: EventCode = 4698  -- scheduled task created
     AND CreatedBy != "SYSTEM"
     AND CreatedBy != "Administrator"
  flag: CreationTime outside 08:00-18:00 local time

Registry run keys to watch:

  • User: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run — per-user startup; runs when that user logs in
  • System: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Run — system-wide startup; runs for every user on every boot

Key Windows event IDs:

  • 4698 — A new scheduled task was created
  • 7045 — A new service was installed on the system

What a hit looks like: A scheduled task named MicrosoftEdgeUpdater created at 2:13 AM by a standard user account — not by the system, not by an admin. That’s worth investigating immediately.
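The account and time-of-day filters above are simple enough to sketch in Python. The event shape here (a dict with created_by, task_name, and time fields) is an assumption for illustration, not a real log schema:

```python
from datetime import datetime

# Accounts expected to create tasks legitimately (adjust for your environment)
SYSTEM_ACCOUNTS = {"SYSTEM", "LOCAL SERVICE", "NETWORK SERVICE", "Administrator"}

def flag_task_creation(event):
    """Flag a scheduled-task-creation record (Windows Event ID 4698) when a
    non-system, non-admin account created the task outside 08:00-18:00."""
    if event["created_by"] in SYSTEM_ACCOUNTS:
        return False
    hour = event["time"].hour
    return hour < 8 or hour >= 18

# The scenario described above: a legit-sounding task name, created
# at 2:13 AM by a standard user account rather than by the system
hit = {
    "task_name": "MicrosoftEdgeUpdater",
    "created_by": "jsmith",
    "time": datetime(2024, 5, 4, 2, 13),
}
```

A real version would also diff task names against last week's snapshot, since attackers deliberately pick names that blend in.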

Hunt 5: Is Anyone Pulling Data They Shouldn’t Be?

Data exfiltration is often the final stage of an attack. Catching it before it completes can be the difference between a bad week and a catastrophic breach. Attackers staging data typically move large volumes to a single folder, compress it, then push it out.

Your hypothesis: A user or process is collecting or moving unusually large amounts of data in preparation for exfiltration.

What to look for:

  • A user account accessing far more files than usual in a short time frame
  • Large files being copied to temp folders, the desktop, or removable drives
  • Unusual outbound data volumes to IP addresses your organization hasn’t communicated with before
  • Compression tools (7zip, winrar, tar) running against directories containing sensitive data
  • Bulk file downloads from cloud storage in under an hour

Sample query logic:

  Hunt 5 query: bulk file access and outbound data
  source = File Activity Logs + Network Logs
  flag: single Username, 500+ file reads within 60 min
  flag: large outbound transfer to new external DestIP after 22:00
  flag: ProcessName in ("7z.exe", "winrar.exe", "tar.exe")
         running against a sensitive directory
  -- Cross-reference with user's normal baseline activity

What a hit looks like: A sales account accessed 3,400 customer records between 11:45 PM and 12:30 AM. A .zip file then appeared on their desktop. That’s a serious lead — escalate it.
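The "500+ file reads within 60 minutes" flag is a sliding-window count per user. A minimal Python sketch using a two-pointer sweep; the event shape and thresholds are illustrative assumptions:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def flag_bulk_readers(file_events, threshold=500, window=timedelta(minutes=60)):
    """Given (username, timestamp) file-read events, return users with
    `threshold` or more reads inside any `window`-long span."""
    by_user = defaultdict(list)
    for user, ts in file_events:
        by_user[user].append(ts)
    flagged = set()
    for user, times in by_user.items():
        times.sort()
        left = 0
        for right, ts in enumerate(times):
            # Shrink the window until it spans at most `window` of time
            while ts - times[left] > window:
                left += 1
            if right - left + 1 >= threshold:
                flagged.add(user)
                break
    return flagged
```

The two-pointer sweep keeps this linear per user, so it scales to a full day of file-activity logs without trouble.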

How to Build the Habit

Running these once is useful. Running them weekly is what actually shifts your team’s detection capability.

A few ways to stay consistent:

  • Pick one hunt per shift. Don’t try to run all five at once. One solid investigation beats five shallow ones.
  • Document what “normal” looks like. Every hunt gets sharper when you have a baseline to compare against.
  • Log your findings — including clean results. A clean result this week tells you something useful next week.
  • Use MITRE ATT&CK as your map. Each hunt above ties to real, documented techniques: T1059.001 (PowerShell), T1078 (Valid Accounts), T1021 (Remote Services), T1053 (Scheduled Tasks), T1041 (Exfiltration).

When Your Analysts Hunt, Secure.com Handles the Noise

L1 analysts running these hunts are doing exactly what great security teams do: going looking, not just waiting. But proactive hunting only works when your analysts aren’t buried under hundreds of unvalidated alerts every shift. That’s the gap Secure.com’s Digital Security Teammates are built to close.

Here’s how Secure.com directly supports the work L1 analysts are doing:

  • Reduces alert volume and false positives by up to 80%. The five hunts above require focus, curiosity, and time to investigate. When analysts are drowning in low-fidelity alerts, none of that is possible. Secure.com’s AI-driven case management automatically triages repetitive, low-fidelity alerts so analysts can dedicate real time to proactive hunting — not just surviving the queue.
  • Automates reactive alert triage so analysts can focus on proactive threat hunting. Alert triage and threat hunting both matter, but they compete for the same hours. Secure.com’s Digital Security Teammates take on the high-volume, routine triage work — freeing L1 analysts to run hunts like these consistently, not just occasionally.
  • Enriches alerts with threat intelligence and asset context before escalation. Rather than handing analysts a raw flood of events to sort through, Secure.com enriches and prioritizes what gets escalated — meaning when a hunt surfaces a genuine threat, analysts have immediate access to enriched context for faster response.
  • Builds space for the habit to stick. One of the hardest parts of threat hunting isn’t the skill — it’s finding the time when 60-80% of analyst hours are consumed by alert triage. When your team isn’t stretched thin responding to noise, a weekly 30-minute hunt becomes realistic instead of aspirational.

The best threat hunters aren’t the ones with the most tools — they’re the ones who have the space to go looking. Secure.com creates that space.

FAQs

Do I need special access or tools to run these hunts?
Most of these hunts work with standard SIEM access and Windows Event Logs. The main requirement is that endpoint logging is enabled and feeding into your SIEM. If your organization has Windows event logging turned on, you have everything you need to get started.
What do I do if I get a hit?
Don’t panic, and don’t act alone. Document what you found — timestamp, hostname, username, command or file path — then escalate to your team lead or senior analyst and follow your organization’s incident response process. A hit doesn’t automatically mean a breach. It means something worth looking at.
How is threat hunting different from alert triage?
Alert triage is reactive — you respond to what your tools flag. Threat hunting is proactive — you go looking for things your tools might have missed. Think of triage as answering the phone. Hunting is making the call first. Both matter, but hunting fills the gaps that signature-based detection leaves open.
How often should L1 analysts run these hunts?
Once a week is a solid starting cadence. Some teams run one hunt per shift. The goal isn’t frequency — it’s consistency. A 30-minute hunt done every week builds institutional knowledge and detection muscle over time, and each iteration sharpens your baseline.

Conclusion

Attackers count on staying quiet. The longer they go undetected, the more damage they do — and the data backs that up. Catching them early doesn’t require a senior title or years of specialized experience.

These five hunts cover the most common attacker behaviors: abusing PowerShell, stealing credentials, moving laterally, planting persistence, and staging data for exfiltration. Each one starts with a question and works with the log data your organization is already collecting.

Start with one. Run it this week. See what your environment tells you.

The best threat hunters aren’t the ones with the most tools — they’re the ones who go looking.