TryHackMe: A Bucket of Phish — Writeup
Published on May 30, 2025
Challenge: TryHackMe: A Bucket of Phish
Difficulty: Easy
This room involved analyzing a fake "Cmail" phishing site hosted on an S3 bucket:
http://darkinjector-phish.s3-website-us-west-2.amazonaws.com
Objective: Identify and retrieve the list of victim users phished via a fake login page.
Initial Analysis
First thing I did was check out the site, and I immediately knew something was off about this "Cmail" login page. I hit `view-source:` to peek under the hood and confirmed there was no `<script>` tag or any client-side JavaScript handling the input.
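The same check works from the terminal. Here's a minimal sketch with curl and grep (the exact patterns to search for are my own guess; the point is just confirming nothing client-side touches the form):

```bash
# Pull the login page and search for anything that could capture input
curl -s http://darkinjector-phish.s3-website-us-west-2.amazonaws.com/ \
  | grep -iE '<script|onsubmit=|fetch\(|XMLHttpRequest'
# No output = no client-side credential handling
```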
The form had `action="/login"` with `method="POST"`, so I figured I'd test it out. Tried logging in with random credentials and got hit with a `405 Method Not Allowed`. This was the lightbulb moment - S3 static website hosting can't handle POST requests, which meant this entire form was just for show and wasn't actually doing anything functional.
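You can reproduce that 405 straight from the command line too. The field names below are made up - any body works, since S3 rejects the method before it ever looks at the payload:

```bash
# POST dummy credentials to the form's action URL and print the status line
curl -si -X POST \
  -d 'username=test&password=test' \
  http://darkinjector-phish.s3-website-us-west-2.amazonaws.com/login \
  | head -n 1
# Expected: HTTP/1.1 405 Method Not Allowed
```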
My Theory
Since there was no JavaScript capturing credentials client-side and the site is completely static, I figured any captured data had to be saved manually in some kind of flat file sitting on the server. The question was: where?
Directory Brute Force
I started throwing wordlists at it with ffuf to try and discover any hidden files:
```bash
ffuf -u http://darkinjector-phish.s3-website-us-west-2.amazonaws.com/FUZZ \
     -w /usr/share/wordlists/dirb/common.txt \
     -e .txt,.csv,.log,.bak
```
What this command does:
- `-u` - Target URL, with `FUZZ` as the placeholder for wordlist entries
- `-w` - Wordlist containing common directory/file names
- `-e` - File extensions to append to each entry (creates "admin.txt", "backup.log", etc.)
I tried multiple common wordlists and extensions but kept coming up empty. The filename obviously wasn't following any standard naming convention, so I was just shooting in the dark.
S3 Enumeration Success
This is where I had my "duh" moment. Instead of blindly guessing filenames, why not just ask the S3 bucket directly what files it has? I used the AWS CLI to list the bucket contents:
```bash
aws s3 ls s3://darkinjector-phish --no-sign-request --region us-west-2
```
That command lists the contents of the S3 bucket named `darkinjector-phish` without signing the request - in other words, anonymously, which only works because the bucket is public. The `--region us-west-2` flag tells the AWS CLI which region the bucket lives in.
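If you don't have the AWS CLI installed, the same anonymous listing is usually reachable over plain HTTP via the bucket's REST endpoint (assuming, as here, that the bucket policy allows public `ListBucket`):

```bash
# Anonymous ListBucket request; a public bucket answers with
# a ListBucketResult XML document containing every object key
curl -s http://darkinjector-phish.s3.us-west-2.amazonaws.com/ \
  | grep -oE '<Key>[^<]+</Key>'
```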
And boom! Found the file: `captured-logins-093582390`
No wonder my ffuf attempts failed - who would've guessed that random number suffix?
Getting the Data
Now that I knew the exact filename, I could access it directly:
```bash
curl http://darkinjector-phish.s3-website-us-west-2.amazonaws.com/captured-logins-093582390
```
The output showed a list of user credentials that had been "captured" by the fake phishing site, along with the flag I was looking for.
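Equivalently, you can pull the file straight from the bucket with the AWS CLI, using the same unsigned access as the listing (the `-` destination streams it to stdout):

```bash
# Download the captured-credentials file anonymously and print it
aws s3 cp s3://darkinjector-phish/captured-logins-093582390 - \
  --no-sign-request --region us-west-2
```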
Security Issues Found
This room highlighted several critical security flaws:
1. Public S3 Bucket - The bucket allowed anyone to list its contents without authentication. This is a massive misconfiguration that exposed all stored files.
2. Unprotected Sensitive Data - Stolen credentials were stored in a publicly accessible file with no access controls whatsoever.
3. No Monitoring - There were no logs or alerts for unauthorized access attempts.
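For what it's worth, the first two issues are the kind of thing the bucket owner could shut off with a single call to the `s3api` Block Public Access operation - a sketch of the fix, not something the room asks you to run:

```bash
# Turn on S3 Block Public Access for the bucket, which overrides any
# public ACLs or bucket policy and cuts off anonymous listing and reads
aws s3api put-public-access-block \
  --bucket darkinjector-phish \
  --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```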
Key Takeaway
This room was a great reminder to always verify what a login form actually does before making assumptions. Static S3 buckets fundamentally can't process POST requests, so if there's no JavaScript present, any "captured" data has to be sitting in an exposed file somewhere.
The real lesson here? Don't brute-force first; think first. Understanding the infrastructure and thinking through its limitations saved me a ton of time compared to mindlessly throwing wordlists at the problem. Sometimes the direct approach (like S3 enumeration) beats brute force, especially when you're dealing with non-standard file naming.
Pretty sneaky way to simulate a phishing attack while teaching some solid enumeration fundamentals and cloud security principles!