
Finding Inadvertently Exposed Files: Tachyon on Cruise Control

Inadvertently exposed files are a perfect illustration of a classic aphorism: never attribute to malice that which is adequately explained by stupidity.

What are Exposed Files?

The quote, widely known as Hanlon’s Razor, can be applied to myriad circumstances, but at times it seems as though it was written specifically for the cyber security community. Countless “breaches”, unlike dramatic Hollywood portrayals of brilliant hackers, are the result of careless handling of sensitive data or innocent mistakes made by overworked IT teams. The information is simply made available publicly... inadvertently. A file is added to a shared folder thought to be an internal file exchange, but the data turns out to be reachable from the web. A configuration file is supposed to be blocked by web server rules, but a typo in the configuration leaves it wide open, exposing credentials to another system. Or an administrator temporarily places a zip file on the website to send it off to an integrator hired to make changes, then forgets to remove it.

These URLs are not linked from anywhere, but if someone manages to get their hands on the URL, the information is theirs. If they happen to be a bad actor, a public “breach” has now been handed to a criminal on a silver platter.

At Delve, we developed the open source Tachyon to find these forgotten or inadvertently exposed files and open configurations. It does so by performing thousands of requests against a remote host. Until recently, Tachyon was akin to the fabled tortoise: slow but steady. Replacing its engine with HammerTime, however, turbo-charged Tachyon, turning our erstwhile tortoise into the hare. Great, right? Not so fast (pun intended). Tachyon was much faster, but it didn’t play nice with others: it used up all available resources.

With the release of Tachyon 3.3.0, prayers are no longer necessary to prevent the assassination of a poorly configured remote server. By default, Tachyon now benchmarks the remote host, identifies the concurrency level that saturates it, then makes sure it never goes that fast again, keeping disruption minimal, and in practice largely non-existent, throughout the scan.

How does it work?

The slow start algorithm performs cohort analysis. Individual request timings are too noisy to rely on, but a group of requests provides significant insight into the behavior of the remote host. An increasing time-waiting-per-request value means that requests are likely waiting in a queue on the remote host, or that the CPU load has climbed to a level where the host underperforms. Either way, this is not a desired behavior.
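To make the idea concrete, here is a minimal sketch of that cohort check in Python. The names and numbers (cohort_is_stable, COHORT_SIZE, TOLERANCE) are assumptions chosen for illustration, not HammerTime’s actual implementation; the point is simply to compare a cohort’s average wait time against the established baseline.

```python
from statistics import mean

COHORT_SIZE = 50   # hypothetical cohort size
TOLERANCE = 1.15   # hypothetical threshold: 15% above the baseline counts as unstable

def cohort_is_stable(wait_times, baseline_avg):
    """Return True while the latest cohort's average wait time stays near the baseline."""
    cohort = wait_times[-COHORT_SIZE:]
    if len(cohort) < COHORT_SIZE:
        return True  # not enough samples yet; assume stable
    return mean(cohort) <= baseline_avg * TOLERANCE
```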

We begin with a low concurrency (the number of requests made simultaneously) and measure what our average time-waiting-per-request should be. Note that this benchmark is not built from artificial requests; it is taken from the scan’s own traffic, as discoveries are made. No time is wasted.

Over time, we increase the concurrency. As long as the time-waiting-per-request remains stable, we keep increasing it. When it stops being stable, we drop below the unstable concurrency level to let the queue recover, then set a ceiling on our concurrency slightly below the safe maximum to be polite (we were founded by Canadians, after all).
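The ramp-up can be pictured as a small loop around that stability check. The sketch below is illustrative only; run_cohort, the step size, and the back-off margin are assumptions rather than Tachyon’s real code. It raises the concurrency while cohorts stay stable, then backs off and caps it below the saturation point.

```python
from statistics import mean

TOLERANCE = 1.15  # same hypothetical "unstable" threshold as above

def ramp_up(run_cohort, baseline_avg, start=5, step=5):
    """Raise concurrency until a cohort of real scan requests becomes unstable,
    then return a polite ceiling slightly below the saturation point."""
    concurrency = start
    while True:
        wait_times = run_cohort(concurrency)  # real scan requests, no artificial traffic
        if mean(wait_times) > baseline_avg * TOLERANCE:
            break                             # queue is building up: the host is saturated
        concurrency += step
    # back off below the unstable level so the remote queue can drain,
    # and keep the ceiling slightly under the last known safe value
    return max(start, concurrency - 2 * step)
```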

This optimal waiting time becomes our cruise speed. Even after finding the maximum concurrency, we keep monitoring the request cohorts and adjust the concurrency to stay on target. Remote host underperforming? Reduce the concurrency to give it some breathing room. Back to health? Speed up. The fluctuations are monitored very closely, and even a minimal slowdown causes the speed to drop, long before the disruption would be perceptible to visitors of the website.
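Once the ceiling is set, the cruise control itself reduces to a small adjustment applied after each cohort. Again, this is only a sketch under the assumptions above, not the engine’s actual logic:

```python
def adjust_concurrency(current, cohort_avg, target_avg, ceiling, floor=1):
    """Nudge the concurrency toward the target average wait time, never above the ceiling."""
    if cohort_avg > target_avg:        # remote host is slowing down: give it breathing room
        return max(floor, current - 1)
    if cohort_avg < target_avg:        # host is back to health: speed up again
        return min(ceiling, current + 1)
    return current
```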

Inadvertently exposed files can result in serious data loss without a sophisticated attack.

The end result is faster scanning than with a conservative, hand-picked concurrency limit, and never having to worry about taking down a host by being too aggressive. Configuration-free... autonomous... no human intervention... just the way we like it at Delve.

But don't worry: if you were just using Tachyon to test your web server configuration and want your DoS tool back, add --concurrency=100.

With the new release, finding those inadvertently exposed files doesn't have to risk server issues.

Check out the changes in HammerTime, or test them in the new releases of Tachyon or Vane2. Let us know what you think.
