Dropbox Security Flaw

Read about it here. I try not to republish content, but the pertinent bit is this:

Newton’s concept, tested on a Windows machine, uses Dropbox’s own configuration files; configuration data, file/directory listings, hashes which are stored in numerous SQLite database files located in %APPDATA%\Dropbox. Inside one file lies a database row containing a users “host_id”, which is used to authenticate each individual user.
Modifying this file and changing the host_id to that of another Dropbox user automatically authenticates the account, providing complete access to that person Dropbox until the user realises that there is a new computer in the “Linked Devices” section of the Dropbox website.

Dropbox or no, you should encrypt sensitive data with an out-of-band key (password/passphrase/YubiKey/token).

Personally, I agree with Dropbox’s statement that if an attacker is able to gain access to your local files, the battle for your Dropbox’d files is already lost. What I take issue with, however, is gaining access to the Dropbox account itself without a password. Either way, I will continue to recommend Dropbox as the best cloud-based replicator out there.

DoS Madness! Part 1: Memory Management / Behind Your Firewall

Alright, here’s the bottom line: your site is down. It’s your job not only to fix it this time, but also to keep this kind of thing from happening again. A DoS attack, in any of its varieties, relies on a single tenet: your service (www.yoursite.com) depends on a chain of processes and devices to generate, package, and deliver its content to your clients. A DoS attack is, in practice, any attack that exploits one or more of these subsystems’ vulnerability to being overwhelmed. It follows that to reduce the potential for a successful DoS attack, you must reduce the ways in which your site’s subsystems can be overwhelmed.

With that thought in mind, let’s take a look at a couple different DoS attacks and the subsystem they target. The next series of articles will break down DoS types by the general subsystem they exploit.

Part 1: Memory Management / Behind Your Firewall

One type of DoS attack that takes advantage of poorly configured web servers is slowloris. This attack opens as many connections as your web server can handle, then keeps them open with occasional partial requests:

slowloris: hey.
webserver: what?
slowloris: ...
webserver: ...
webserver: are you still there?
slowloris: hey...
webserver: what?
slowloris: ...


Slowloris can be mitigated by rate limiting incoming connections to Apache and/or fronting it with a non-vulnerable web server, such as nginx.
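If you’d rather not swap web servers, one option is a kernel-level cap on concurrent connections per source IP. This is a sketch, not a tuned rule: connlimit is a standard iptables match, but the threshold of 20 is an assumed figure you would adjust for your own traffic (NATed offices can legitimately exceed it).

```shell
# Drop new connections to port 80 from any source IP that already holds
# 20 or more open connections. The "20" is an assumed threshold.
iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 20 -j DROP
```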

Another tool that can be used maliciously is Google’s skipfish, a security scanner that can double as a load tester. Skipfish forms URLs from crawled data and dictionary-based guesses. Used maliciously, it can overwhelm a poorly deployed LAMP stack, whose servers may run out of memory and crash.

When it does, there may be an OOM_Killer message in your /var/log/messages file as a final “OMG-help-me” dump right before reboot. Memory is a scarce resource, and to improve performance, the Linux kernel will overcommit RAM to the numerous processes that request an allocation of memory space. Read more about the Linux kernel’s overcommit behavior here. In summary, it’s kind of like simultaneously sitting at five different $5 blackjack tables with $20 in chips. Most times, a couple of games will cash out, releasing chips back into the resource pool, and there is no net loss. The kernel runs faster, and everybody wins. That is, until traffic increases beyond what the system can handle, and everybody loses. When this happens, the kernel has to run from at least one table (and an angry pit boss). OOM_Killer is the mechanism designed to handle exceptions to the Linux kernel’s overcommit behavior; it uses a scoring algorithm to decide which process to kill.
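To see which process the pit boss would eject first on your own box, you can read the kernel’s per-process scores straight out of /proc. A Linux-specific sketch (it assumes a /proc filesystem exposing oom_score per PID):

```shell
# Print the five processes OOM_Killer currently considers the best victims,
# highest oom_score first. Kernel threads show as "unknown" (empty cmdline).
for pid in /proc/[0-9]*; do
    score=$(cat "$pid/oom_score" 2>/dev/null) || continue
    cmd=$(cat "$pid/cmdline" 2>/dev/null | tr '\0' ' ')
    printf '%s %s\n' "$score" "${cmd:-unknown}"
done | sort -rn | head -5
```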

A particularly amusing analogy by Andries Brouwer describes OOM_Killer thus:

An aircraft company discovered that it was cheaper to fly its planes with less fuel on board. The planes would be lighter and use less fuel and money was saved. On rare occasions however the amount of fuel was insufficient, and the plane would crash. This problem was solved by the engineers of the company by the development of a special OOF (out-of-fuel) mechanism. In emergency cases a passenger was selected and thrown out of the plane. (When necessary, the procedure was repeated.) A large body of theory was developed and many publications were devoted to the problem of properly selecting the victim to be ejected. Should the victim be chosen at random? Or should one choose the heaviest person? Or the oldest? Should passengers pay in order not to be ejected, so that the victim would be the poorest on board? And if for example the heaviest person was chosen, should there be a special exception in case that was the pilot? Should first class passengers be exempted? Now that the OOF mechanism existed, it would be activated every now and then, and eject passengers even when there was no fuel shortage. The engineers are still studying precisely how this malfunction is caused.

So if you followed the win.tue.nl link above, you’ll note that, like a scared gambler, the kernel (post-2.5.30) can be instructed to be more conservative when allocating memory. The values below, written to /proc/sys/vm/overcommit_memory, moderate this behavior.

echo 2 > /proc/sys/vm/overcommit_memory # never overcommit: refuse allocations beyond swap plus a fraction of physical memory*
echo 0 > /proc/sys/vm/overcommit_memory # (default) heuristic overcommit: the kernel does what it thinks is best
echo 1 > /proc/sys/vm/overcommit_memory # always overcommit: never refuse an app memory

*This fraction is defined in /proc/sys/vm/overcommit_ratio – the default value is 50 (= 50%).
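Echoing into /proc doesn’t survive a reboot. Assuming the standard procps sysctl utility, the same knobs can be set persistently:

```shell
# One-off, equivalent to the echo commands above:
sysctl -w vm.overcommit_memory=2
sysctl -w vm.overcommit_ratio=50

# Persistent across reboots (requires root): append to /etc/sysctl.conf
echo "vm.overcommit_memory = 2" >> /etc/sysctl.conf
echo "vm.overcommit_ratio = 50" >> /etc/sysctl.conf
```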

But that’s not a cure-all for memory management; it only dictates kernel behavior. App behavior is another story. Here, you have to ensure your apps never ask for more memory than your system can afford to give out. This will differ based on what type of server it is, but the most commonly exploited servers (from my perspective, anyway) are LAMP stacks, and on a LAMP stack, the two most common reasons OOM_Killer is invoked are:

  • Too many Apache worker processes. Take a look at the MaxClients declaration in your httpd.conf file. Now, looking at the MPM (by default it’s usually prefork, which means one thread per process), multiply your MaxClients by the average size of your Apache processes, and you’ll have Apache’s total memory allocation at full load. Naturally, you do not want this figure anywhere close to your physical RAM limit…

  • …especially when your website is database driven. Generally speaking, more traffic means more db time, which means more RAM consumption alongside the web server. Be sure to tune your MySQL installation to handle the number of queries that Apache can present at load. There are a number of parameters that will affect the way the db uses memory. Tune both this and Apache to not exceed the amount of memory you have available to your system.

  • For more on LAMP stack tuning, please refer to the following links:
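The MaxClients arithmetic above can be done straight from ps. A rough sketch, assuming the prefork MPM and a process name of httpd (substitute apache2 on Debian-family systems); the MaxClients of 256 is a placeholder for whatever is in your httpd.conf:

```shell
MAXCLIENTS=256  # placeholder: use the value from your httpd.conf
# Average resident set size (KB) across currently running Apache workers.
AVG_KB=$(ps -C httpd -o rss= | awk '{ sum += $1; n++ } END { print (n ? int(sum / n) : 0) }')
echo "Estimated Apache footprint at full load: $((AVG_KB * MAXCLIENTS / 1024)) MB"
```

Note this measures workers at their current size; long-running workers tend to grow, so pad the estimate.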

Finally, you may wish to consider using a lighter web server, or a caching reverse proxy, to optimize performance. The bottom line is, when faced with an array of incoming attempts to overwhelm and disable your backend applications, you want your system to be robust enough to gracefully handle what it can and to redirect or ignore the rest.

In Part 2: Socket Management / Behind Your NIC, we’ll be looking at SYN floods and other non-port-saturating attacks. As this series progresses, there will be some overlap: some mitigation techniques impact more than one type of DoS attack. As long as those correlations are mutually beneficial, we’ll be just hunky-dory.

    Data Fail: Sidekick Phones

The Microsoft data store where T-Mobile Sidekick phones save their user data, such as contact info and pictures, has reportedly been lost beyond repair.

On October 3, T-Mobile Chief Operations Officer Jim Alling wrote the following post on the T-Mobile forum site:

    Dear valued T-Mobile Sidekick customers:

    I realize that for many of you, your T-Mobile Sidekick is how you stay in touch with your friends, family and others.  I sincerely apologize for the impact the current disruption of data services may be having on you.  I assure you that T-Mobile is working very closely with Danger/Microsoft to resolve the issue as quickly as possible.  T-Mobile-supported services, such as voice calls and SMS/MMS, have not been affected and continue to be operational.  Danger/Microsoft has been working, and will continue working through the week, to restore data functionality and other features.

    I understand that this data service disruption is very frustrating to our valued Sidekick customers.  For many years, the Sidekick has been, and continues to be, a cornerstone device for T-Mobile.  And we believe Sidekick customers are among the most loyal customers anywhere.  Recognizing that, and to address any inconvenience Sidekick data customers are experiencing, T-Mobile will automatically credit one month of data service to customers who subscribe to T-Mobile Sidekick data plans.  There is nothing you need to do to get this credit – T-Mobile will post the credit to these accounts in the coming days.

    We will continue to post the latest information and FAQs to these Forums. I appreciate you being a loyal T-Mobile customer, and appreciate your patience as everyone works hard to resolve the current issues.  Thank you.


    Jim Alling, Chief Operations Officer, T-Mobile USA

    Then, after a torrent of discussion on the forum site, the following update was provided earlier today:

    Dear valued T-Mobile Sidekick customers:

    We are thankful for your continued patience as Microsoft/Danger continues to work on preserving platform stability and restoring all services for our Sidekick customers.  We have made significant progress this past weekend, restoring services to virtually every customer.  Microsoft/Danger has teams of experts in place who are working around-the-clock to ensure this stability is maintained.

    Regarding those of you who have lost personal content, T-Mobile and Microsoft/Danger continue to do all we can to recover and return any lost information.  Recent efforts indicate the prospects of recovering some lost content may now be possible.  We will continue to keep you updated on this front; we know how important this is to you.

    In the event certain customers have experienced a significant and permanent loss of personal content, T-Mobile will be sending these customers a $100 customer appreciation card.  This will be in addition to the free month of data service that already went to Sidekick data customers.  This card can be used towards T-Mobile products and services, or a customer’s T-Mobile bill.  For those who fall into this category, details will be sent out in the next 14 days – there is no action needed on the part of these customers.  We however remain hopeful that for the majority of our customers, personal content can be recovered.
    Moderator, T-Mobile Forums

At this time, neither Microsoft nor T-Mobile has confirmed the conjecture that a SAN update caused the failure:

    So yeah..

    I would like to know what discounts are T-mobile going to give on a new Phone. I am probably going to move to the Moto Cliq, But I and other sidekick users should get a full phone discount not just a % of it..  (Microsoft should pay for it)

    hmm Roz Ho haven’t you her of BACKUP…?

    Quoting Hiptop3

    Currently the rumor with the most weight is as follows:

    Microsoft was upgrading their SAN (Storage Area Network aka the thing that stores all your data) and had hired Hitachi to come in and do it for them. Typically in an upgrade like this, you are expected to make backups of your SAN before the upgrade happens. Microsoft failed to make these backups for some reason. We’re not sure if it was because of the amount of data that would be required, if they didn’t have time to do it, or if they simply forgot. Regardless of why, Microsoft should know better. So Hitachi worked on upgrading the SAN and something went wrong, resulting in it’s destruction. Currently the plan is to try to get the devices that still have personal data on them to sync back to the servers and at least keep the data that users have on their device saved.


    Microsoft Do you understand that you are making yourself and T-mobile loose MONEY????

    Also with me being a Sidekick owner I feel betrayed by Microsoft not T-mobile.

    This outage I was all fine about at first but now it is just to much. We sidekick owners rely on Danger witch is now owned by Micro to keep are data stored on a secure server and that is why us users never backed up are data. I mean the sidekick does not even have a mass contact save Option. The user has to save them one by one. If I do stay with the sidekick I would like to see Options to save all on SD becuase a SIM can only hold around 250..

    I have lost business and meetings from this outage and I am not happy.

    So to everyone

    It is not T-mobiles Fault so do not blame them. There customer service has been AWESOME

    Also Danger and Microsoft do not comunicate with T-mobile as much that is why there is not much info.

    “I wonder if we call Microsoft and bug them will they give us any info, they will probably say u have to call t-mobile. Well T-mobile is not the one who messed up,.they do not UPDATE THE SAN…..”

After a week of attempting to salvage the data, it would appear that Microsoft was unsuccessful. If the SAN speculation is correct, then this was simply a failure of the data’s underlying SAN. The question is: why should a failing SAN take the data of an entire customer base with it? I seriously doubt this would have occurred had it been a normal hardware breakdown. Well-designed storage solutions are built on the precondition of surviving a head failure, a network failure, any sort of failure, really, without losing data. One can thus speculate that gross human error was at fault, and frankly, that means management was not doing its job. Not enough layers of redundancy were built into the system, and not enough protective layers were written into policy to keep this human error, or whatever it was, from cascading into a data-loss scenario. Data management is a big responsibility, and at many firms not enough resources go into its upkeep. Microsoft, it would appear, is one of them.

    Slowloris and You

    UPDATE: 20090826 – Corrected typo in “Slowloris and You.” It used to say “Slowlaris and You.” I keep getting slowloris confused with my nickname for “Solaris.” =D

Back in July, http://ha.ckers.org/slowloris/ published an exploit against Apache and other web servers (see the link for details) that takes advantage of thread-per-connection designs. It works by tying up web server threads with partial HTTP requests, periodically sending a few more bytes to keep each socket open. In general, multi-threaded web servers such as Apache (httpd/apache2) are vulnerable; IIS and most proxies are not.

One writeup suggested using iptables to rate limit incoming port 80 requests. In general, this should be fine for many applications, though CERT has warned that large clients behind NATs may be affected, so the hitcount/time ratio should be adjusted to your needs.
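A sketch of that kind of rate limit using iptables’ recent match; the 20-hits-per-60-seconds figures are assumptions to tune per the NAT caveat above:

```shell
# Track new connections to port 80 per source IP, and drop any source that
# opens more than 20 in a 60-second window. Tune for NATed client pools.
iptables -A INPUT -p tcp --dport 80 --syn -m recent --name www --set
iptables -A INPUT -p tcp --dport 80 --syn -m recent --name www \
         --update --seconds 60 --hitcount 20 -j DROP
```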

Another offers tips on mitigating this attack by enabling delayed binding on hardware load balancers.

In short, the consensus mitigation method involves connection restrictions in the form of iptables rules or Apache modules (most are of limited value, frankly), or shielding the web servers behind load balancers (such as HAProxy).

    Conficker Update Part 3

According to http://forum.drweb.com/index.php?showtopic=277240 , Win32.HLLW.Shadow.based is a variant of Conficker/downadup.

    Symptom: Every available port from 1024-5000 is used to connect to various servers on destination port 445. Basically, the worm opens these connections to download and wait for malicious binaries.
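A quick way to look for that pattern is to filter your connection table for it. This is a hedged sketch whose column positions assume Linux-style netstat -ant output (on the infected Windows box itself, netstat -ano prints different columns, but the same filtering idea applies):

```shell
# List established connections to remote port 445 whose local (source) port
# falls in the 1024-5000 range described above.
netstat -ant | awk '$6 == "ESTABLISHED" && $5 ~ /:445$/ {
    split($4, local, ":")
    if ((local[2] + 0) >= 1024 && (local[2] + 0) <= 5000) print
}'
```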

The removal tools at http://www.bdtools.net/ do not detect this variant; you have to use Dr.Web’s Cureit to detect and remove it. According to them, the recommended procedure is to install the following hotfixes:
    * MS08-067

    * MS08-068

    * MS09-001

    And then run Cureit, a fully functional shareware app.

    In case you’re reading this from an infected server, I’ve downloaded and included some of these files here (because if you’re infected, you won’t be able to access certain sites, drweb.com being one).
