April 21, 2016

Password brute-forcing exercise

(Blog post corrected: the original CPU times for WPA/WPA2 were wrong)

I had some time to experiment with password cracking, looking mainly at brute-force performance. Password cracking can be a very interesting area where dictionaries, rainbow tables, permutations, and different optimizations can yield good results, but this time I aimed at good old brute-forcing.

I tested two different CPUs with Hashcat and one GPU with the oclHashcat tool, using an 8-character password with different character sets and a few algorithms. Other tools may provide better performance, but I haven't looked much into the area of cracking. The estimates provided by the tools were used. I have also noticed that a "typical" Finnish ISP ADSL box ships with a 10-character hexadecimal WPA/WPA2 password pre-set for wireless, so I wanted an estimate of how hard cracking that kind of password would be.

Based on the tests, a simple lower-alpha password (MD5/SHA1) is quickly cracked in a few hours even with the slower CPU. Adding capital letters and numbers to the MD5 password increases the cracking time considerably, taking over two months with the slower CPU and a bit under a month with the faster one.

Adding special characters to the 8-character password (hashed with MD5) pushes the cracking time into years with CPU-based cracking. However, it is a different story with a GPU. I was very impressed with the consumer-grade GPU performance, as it would have taken only 5-6 hours to crack the 8-character alphanumeric password. The password requiring special characters would have been done in around seven days.
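These worst-case times follow directly from keyspace size divided by the hash rate. A minimal sketch, using rates from the benchmark figures at the end of this post:

```python
# Worst-case brute-force time: keyspace size divided by candidate rate.
def crack_time_seconds(charset_size: int, length: int, rate: float) -> float:
    return charset_size ** length / rate

# 8-character lowercase (26 chars) MD5 on the i3-550 at 31.80M words/s:
print(crack_time_seconds(26, 8, 31.80e6) / 3600)    # about 1.8 hours

# 8-character alphanumeric (62 chars) MD5 on the GTX980 at 10801M words/s:
print(crack_time_seconds(62, 8, 10801e6) / 3600)    # about 5.6 hours
```

On average a password falls at half the worst-case time, so the real numbers are even less comforting.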

Of course, keeping systems running at full utilization for a prolonged period would require some serious cooling for the rig. I'm also not sure about the stability of GPU cracking: while trying to run the benchmark I encountered video glitches and was basically forced to reboot the system.

The WPA cracking part was interesting. According to the tools themselves, it would take almost 20 years to crack the password with the slower CPU and about 7 years with the faster CPU. With the GPU the time was a little over two months. This is because WPA/WPA2 derives the key with PBKDF2, so each candidate passphrase is deliberately expensive to compute.
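The arithmetic checks out: 10 hexadecimal characters give a keyspace of 16^10, roughly 1.1 trillion candidates, which at the GPU's WPA rate works out to the two-month figure.

```python
# 10 hexadecimal characters -> 16^10 possible passwords.
keyspace = 16 ** 10
gpu_rate = 193.4e3                  # GTX980 WPA candidates per second
days = keyspace / gpu_rate / 86400
print(round(days))                  # about 66 days
```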

A mitigating factor for online services and the like is to use computationally expensive algorithms and/or a proper per-password salt. This increases the time required for cracking, which makes it infeasible for under-resourced attackers. Use long and complex passphrases (over 40 characters) for your network access points, prefer passphrases over passwords where possible, and do not re-use them across different services.
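As a sketch of the idea, a salted, deliberately slow key derivation function such as PBKDF2 is available in Python's standard library. The iteration count here is an illustrative choice, not a tuned recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None, iterations: int = 200_000):
    # A random per-user salt defeats precomputed rainbow tables;
    # the iteration count makes each brute-force guess expensive.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password: str, salt: bytes, iterations: int, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, iters, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, iters, digest))  # True
```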

If we think about a well-resourced attacker that has tried all the easy methods first and failed, what would it try next? Direct network exploitation of services has failed, as have attempts to spear-phish, use watering hole attacks, and so on. It could attack the company by physical means, tailgating and the like, but also through an employee. If the employee is cautious enough not to fall for external network-based and social-engineering attacks, how about close-proximity attacks?

What I mean is that employees could be monitored and attacked: does the building they live in come with a specific ISP connection, what settings are the devices shipped with to customers, which access point is the strongest at the employee's apartment? Perform a quick disassociation attack, force the client to re-associate with the AP, and capture the WPA/WPA2 handshake. Maybe they haven't bothered to change the pre-set password, as the system needs to be reset once in a while?

Get the cracking running on a 4-GPU rig based on the acquired data, which would cut the required time down to 16.5 days. Or maybe it is feasible to build a software solution that distributes the brute-force effort between four rigs, each with four GPUs? That would decrease the cracking time to about four days. The cost would be peanuts for a serious adversary, who probably has even better computing capacity available. Then use the key to access the employee's private network, inject malicious code to gain a foothold, and eventually gain access to the employer's systems.
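Brute-forcing is embarrassingly parallel: the keyspace partitions cleanly across devices, so scaling is near-linear.

```python
def distributed_days(single_gpu_days: float, gpus: int) -> float:
    # Each GPU gets an equal, independent slice of the keyspace,
    # so the wall-clock time divides by the number of GPUs.
    return single_gpu_days / gpus

print(distributed_days(66, 4))     # 16.5 days on one 4-GPU rig
print(distributed_days(66, 16))    # about 4 days across four such rigs
```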

Who knows what kind of systems some serious nation-state adversaries really have? But anyway, enough with the tinfoil hat. Below are some times provided by the tools (the GTX was run with oclHashcat):

MD5, alpha (a-z), 8 characters
i3-550   @  3.2GHz :  31.80M words/s, 1 hour 48 min
i7-4790K @  4.0GHz :  97.32M words/s, 35 min
GTX980   @ 1278MHz : 10989M words/s, 20 sec

SHA1, alpha (a-z), 8 characters
i3-550   @  3.2GHz : 19.93M words/s, 2 hours 54 min
i7-4790K @  4.0GHz : 59.53M words/s, 58 min
GTX980   @ 1278MHz :  3305M words/s, 51 sec

MD5, alphanum (a-z, A-Z, 0-9), 8 characters (typical app requirement)
i3-550   @  3.2GHz :  33.37M words/s, 77 days
i7-4790K @  4.0GHz : 102.74M words/s, 25 days
GTX980   @ 1278MHz : 10801M words/s, 5 hours 39 min

MD5, alphanum + special (a-z, A-Z, 0-9, !"#¤...), 8 characters
i3-550   @  3.2GHz :  32.80M words/s, 6 years 4 months
i7-4790K @  4.0GHz : 103.32M words/s, 2 years
GTX980   @ 1278MHz : 10801M words/s, 7 days

WPA/WPA2, 10-character hex password
i3-550   @  3.2GHz :   1.77k/s, 20 years
i7-4790K @  4.0GHz :   4.72k/s, 7 years
GTX980   @ 1278MHz : 193.4k/s, 66 days

April 25, 2015

Share the Threat Intelligence?

I've read through some reports, one of which was Verizon's DBIR 2015. It stated something I had seen earlier in a DEF CON talk: threat intelligence feeds have minimal overlap. The problem this poses for an organization is that it would need to ingest a lot of feeds for proper coverage, and managing these would be a difficult task. It was also mentioned that IP/domain/URL/etc. indicators are short-lived, which was also kind of expected.

This brings us to the point that without a proper mechanism for sharing threat intelligence we do not get anywhere. There is great research out there by many parties who regularly publish very interesting material, with detailed information ranging from IP addresses to specific files and so on. But the problem is that this is dispersed over the Internet and typically found scattered inside a lengthy blog post. In addition, many parties offer paid or free feed subscriptions, but as stated in the beginning, so many of them would need to be in scope to be useful.

There are multiple such "silos" where some parties have teamed up and share intel with each other, but that typically requires using one of their products and yet again has the problem of limited coverage. So I will once more drum for global threat intelligence sharing and try to outline a system that might work for the benefit of all. I'd be happy if this stirs some conversation.

I believe that one centralized system, planned and built by the security industry, would be the solution. It would have specific interfaces for authenticated parties to insert different types of threat intel data into the system: files, registry keys, domains, IP addresses, URLs, properly categorized as e.g. C2, scanning hosts, affected industry, and so on. The system would automatically export this data into different usable formats, e.g. for iptables/web-proxy/dns/ids and so on.
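To illustrate the export side, here is a sketch; the indicator fields and categories are my own assumptions, not any real feed schema, and the unbound-style DNS syntax is just one possible target format:

```python
# Hypothetical indicator records; field names are illustrative only.
indicators = [
    {"type": "ip",     "value": "203.0.113.7",  "category": "c2"},
    {"type": "domain", "value": "evil.example", "category": "c2"},
    {"type": "ip",     "value": "198.51.100.9", "category": "scanner"},
]

def to_iptables(records):
    # One DROP rule per IP indicator.
    return [f"iptables -A INPUT -s {r['value']} -j DROP"
            for r in records if r["type"] == "ip"]

def to_dns_blocklist(records):
    # unbound-style local-zone entries for domain indicators.
    return [f'local-zone: "{r["value"]}" always_nxdomain'
            for r in records if r["type"] == "domain"]

for line in to_iptables(indicators) + to_dns_blocklist(indicators):
    print(line)
```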

This data would then be retrievable by organizations based on their preferences; organizations would first run sanity checks on the data before applying it to different technologies. A sanity check like "am I in the feed data" would also serve a detective purpose. This way there would be threat intelligence feeds with proper coverage and proper co-operation.
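The "am I in the feed data" check could be as simple as testing feed IPs against the organization's own address ranges. The networks below are documentation placeholders:

```python
import ipaddress

# Placeholder ranges standing in for the organization's own address space.
own_networks = [ipaddress.ip_network("192.0.2.0/24"),
                ipaddress.ip_network("2001:db8::/32")]

def appears_in_feed(feed_ips, networks):
    # Return the feed entries that fall inside our own ranges --
    # a hit suggests one of our hosts is flagged as malicious.
    hits = []
    for ip in feed_ips:
        addr = ipaddress.ip_address(ip)
        if any(addr in net for net in networks):
            hits.append(ip)
    return hits

print(appears_in_feed(["192.0.2.55", "203.0.113.1"], own_networks))  # ['192.0.2.55']
```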

In all "fairness", there could be a lead time during which a security company's customers benefit from the data earlier via direct updates, say 6-8 hours. It might also be fair for the centralized site to offer statistics on what type of intel each security company has provided, which would allow customers to judge how much a company does for the security industry. I don't know, this is a bit of a difficult area. Perhaps this kind of solution can't be free, but at least it would have good coverage!

Any thoughts? Am I being insane again?

March 10, 2015

Security news, globally?

It has been a very long time since I posted anything. I had totally forgotten about those mindmaps, but I'll have to see if I someday manage to get my head around them again. This time I will not promise anything! This post, on the other hand, is a brief peek into what I'm currently thinking about.

Many things I read are related to the same events from a slightly different angle, if not exactly the same post or tweet. It is kind of boring. I've also noticed that many things seem to be related to the US. It is as if everything happens there. It must partially be a language thing, I hope.

For example, rarely does anything seem to happen in Finland; in Finnish you mainly read about the same things you've already read about in English. And when something does happen, it is like the whole world and its dog hits the fan.

This situation kind of sucks. Of course a lot of online services are US-based, these get targeted, and people write about them, but something must also happen outside the US. Is it that other countries do not tend to write about security, and if they do, is there nobody writing about it in English?

I have been thinking about collecting a wordlist of security-related keywords in different languages and using it to gather news from different regions via Google or local search engines, Twitter posts, and so on.

The problem is that these would have to be translated. I think the best job is currently done by Google, and the only way I see this being possible is to ID-tag the collected items/topics and throw one language "pack" at a time at Google for translation.
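Batching the collected items into per-language packs could look like this sketch; the item structure and language tags are assumptions for illustration:

```python
from collections import defaultdict

# Hypothetical collected items: (id, language tag, title).
items = [
    (1, "fi", "Tietomurto suomalaisessa verkkokaupassa"),
    (2, "de", "Datenleck bei einem Onlinedienst"),
    (3, "fi", "Haittaohjelma leviää sähköpostitse"),
]

def build_language_packs(collected):
    # Group items by language; the ID is kept so translated
    # results can be mapped back to the original items.
    packs = defaultdict(list)
    for item_id, lang, title in collected:
        packs[lang].append((item_id, title))
    return dict(packs)

packs = build_language_packs(items)
for lang, batch in sorted(packs.items()):
    print(lang, [item_id for item_id, _ in batch])
```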

Does anyone else have opinions on how to stay current with security news globally? I'm not going into the other functions such a news reader would need, like removing duplicates, but mainly aim to get more visibility.