Join us at Etsy’s Code as Craft

We’re excited to announce that Sophia D’Antoine will be the next featured speaker at Etsy’s Code as Craft series on Wednesday, February 10th from 6:30-8pm in NYC.

What is Code as Craft?

Etsy Code as Craft events are a semi-monthly series of guest speakers who explore a technical topic or computing trend, sharing both conceptual ideas and practical advice. All talks will take place at the Etsy Labs on the 7th floor at 55 Washington Street in beautiful Brooklyn (Suite 712). Come see an awesome speaker and take a whirl in our custom photo booth. We hope to see you at an upcoming event!

In her talk, Sophia will discuss the latest in iOS security and its intersection with compiler theory. She will cover one of our ongoing projects, MAST, a mobile application security toolkit for iOS, which we discussed on this blog last year. Since then, we’ve continued to work on it, added new features, and transitioned it from a proof-of-concept DARPA project to a full-fledged mobile app protection suite.

What’s the talk about?

iOS applications have become increasingly popular targets for hackers, reverse engineers, and software pirates. In this presentation, we discuss the current state of iOS attacks, review available security APIs, and reveal why they are not enough to defend against known threats. For high-risk applications, novel protections that go beyond those offered by Apple are required. As a solution, we discuss the design of the Mobile Application Security Toolkit (MAST), which ties together jailbreak detection, anti-debugging, and anti-reversing in LLVM to address these risks.

We hope to see you there. If you’re interested in attending, follow this link to register. MAST is still a beta product, so if you’re interested in using it on your own iOS applications after seeing this talk, contact us directly.

Software Security Ideas Ahead of Their Time

Every good security researcher has a well-curated list of blogs they subscribe to. At Trail of Bits, given our interest in software security and its intersections with programming languages, one of our favorites is The Programming Language Enthusiast by Michael Hicks.

Our primary activity is to describe and discuss research about — and the practical development and use of — programming languages and programming tools (PLPT). PLPT is a core area of computer science that bridges high-level algorithms/designs and their executable implementations. It is a field that has deep roots in mathematical logic and the theory of computation but also produces practical compilers and analysis tools.

Andrew Ruef, one of our employees and a PhD student at UMD, has written a guest blog post for the PL Enthusiast on the topic of software security ideas that were ahead of their time.

As researchers, we are often asked to look into a crystal ball. We try to anticipate future problems so that work we begin now will address problems before they become acute. Sometimes, a researcher foresees a problem and its possible solution, but chooses not to pursue it. In a sense, she has found, and discarded, an idea ahead of its time.

Recently, a friend of Andrew’s pointed him to a 20-year-old email exchange on the “firewalls” mailing list that blithely suggests, and discards, problems and solutions that are now quite relevant, and on the cutting edge of software security research. The situation is both entertaining and instructive, especially in that the ideas are quite squarely in the domain of programming languages research, but were not considered by PL researchers at the time (as far as we know).

Read on for a deep dive into the firewalls listserv from 1995, prior to the publication of Smashing the Stack for Fun and Profit, as a few casual observers correctly anticipate the next 20 years of software security research.

If you enjoyed Andrew’s post on the PL Enthusiast, we recommend a few other posts there that touch upon software security.

Hacking for Charity: Automated Bug-finding in LibOTR

At the end of last year, we had some free time to explore new and interesting uses of the automated bug-finding technology we developed for the DARPA Cyber Grand Challenge. While the rest of the competitors are quietly preparing for the CGC Final Event, we can entertain you with tales of running our bug-finding tools against real Linux applications.

Like many good stories, this one starts with a bet:

On November 4, 2014, Thomas Ptacek (of Starfighter) bet Matthew Green (of Johns Hopkins) that libotr, a popular library used in secure messaging software, would have a high severity (e.g. remote code execution, information disclosure) bug in the next 12 months. Here at Trail of Bits, we like a good wager, especially when the proceeds go to charity. And we just happened to have an automated bug-finding system lying around, itching for something to do. The temptation was too much to resist: we decided to use our automated bug-finding system from the Cyber Grand Challenge to look for bugs in libotr.

Before we go on, we should state that this was not a security audit. We simply wanted to test how well our automated bug-finding system works on real Linux software and maybe win some money for charity.

We successfully enhanced our bug-finding system to support the libotr library and tested it extensively. Our system confirmed that there were no critical bugs in code paths that we tested; since no one else reported any bugs, the bet ended with Matthew Green donating $1000 to Partners in Health.

Read on to discover the challenges encrypted communications systems present for automated testing, how we solved them, and our testing methodology. Of course, just because our system didn’t find bugs in libotr does not mean that libotr is bug-free.

Background

The automated bug-finding system, known as a Cyber Reasoning System (CRS), that we built for the Cyber Grand Challenge operates on binary code for the DECREE operating system. While DECREE is based on Linux, it differs considerably from plain Linux. DECREE has no signals, no shared memory, no threads, no sockets, no files, and only seven system calls. This means that DECREE is not binary or source compatible with Linux libraries like libotr.

After weighing our options, we decided the easiest and fastest way to test libotr was to port it to DECREE, instead of adding full Linux support to our CRS. We attempted the port in a generic manner, to ensure we could use the lessons learned to test future Linux software.

To port libotr, we had to solve two major issues: shared library dependencies (libotr depends on libgpgerror and libgcrypt) and libc support. We used LLVM to solve both problems at once. First, we used whole-program-llvm to compile libotr and all dependencies to LLVM bitcode. We then merged all the shared libraries at the bitcode level, and aggressively optimized the resulting bitcode. In one move, we eliminated the need for shared libraries, and drastically reduced the amount of libc we’d have to implement, because unused libc calls were optimized out of the resulting bitcode. To build a libc that works on DECREE, we combined libc implementations from the challenge binaries, stubbed functions that don’t make sense in DECREE, and created new implementations based on DECREE calls where appropriate.
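
In command form, that workflow looks roughly like this; the library names and opt flags are illustrative, not the exact invocation we used:

    export LLVM_COMPILER=clang
    CC=wllvm ./configure && make          # build each library through the wrapper
    extract-bc libotr.so                  # pull the embedded bitcode back out
    llvm-link libotr.so.bc libgcrypt.so.bc libgpg-error.so.bc -o merged.bc
    opt -internalize -internalize-public-api-list=main -globaldce -O2 \
        merged.bc -o merged_opt.bc        # strip and optimize unused code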

Automated Testing

Encrypted communications applications are, by design, difficult to audit automatically. This makes perfect sense: if an automated system can reason about how ciphertext relates to plaintext, the encrypted communication system is already broken. These systems are also difficult to audit by random testing (e.g. fuzzing), because recipients verify the integrity of every message. Typically, when testing encrypted systems, the encryption is turned off (or data is manipulated prior to encryption or after decryption). We wanted to simulate testing a black-box binary, so we did not modify libotr in any way. Instead, we thought the best path was to make our CRS simulate a man-in-the-middle (MITM) attack. Because we tested an unmodified libotr, our CRS could not effectively attack code past the message integrity checks. However, there was still plenty of attack surface: message control data, headers, and the possibility of flaws in decryption/authentication code. The problem was that our CRS was not designed to act as a MITM. We instead architected the test application (not libotr) to be easier to attack, which resulted in the convoluted architecture below.

The CRS acts as a man-in-the-middle between two applications communicating using libotr.

Creating the test application was more difficult than porting libotr to DECREE. The porting process was fairly straightforward and took about two weeks. The sample application took a bit longer, and was a much more frustrating experience: the official libotr distribution has no sample code, and the documentation leaves a lot to be desired.

Our testing was limited by the features of libotr exercised by our sample application (for instance, it doesn’t use SMP), and by the unusual test application we created. Additionally, some vulnerabilities may only occur after decryption, and modification of encrypted and authenticated data will never trigger these bugs.

Results

The results of testing libotr are very encouraging. We ran 48 Xeon CPUs for 24 hours against our libotr sample application, and did not identify any memory safety violations.

This negative result does not mean that libotr is bug free. We only tested a subset of libotr, and there are considerable parts that our CRS never audited. The lack of obvious bugs is, however, a very good sign.

Conclusion

The timeframe of the libotr bet has expired without any reported high severity vulnerabilities. We audited parts of libotr with our automated bug-finding tools, and also didn’t find memory corruption vulnerabilities. In the process of setting up this test, we learned how to port Linux applications to DECREE and verified that our CRS can identify real bugs in Linux programs. Better documentation, tests, and sample applications that exercise every libotr feature would simplify both automated and manual auditing. For this experiment we constrained ourselves to an unmodified libotr. We are planning a future test where we modify libotr to enable easier automated testing.

2015 In Review

Now that the new year is upon us, we can look back and take stock of 2015. The past year saw Trail of Bits continuing our prior work, such as automated vulnerability discovery and remediation, and branching out into new areas, like secure self-hosted video chat. We also increased our community outreach: we advocated against reactionary regulation, supported security-related non-profits, hosted a bi-monthly security meetup in NYC, and more. Here are just some of the ways we helped improve the state of security and privacy in 2015.

Participated In DARPA’s Cyber Grand Challenge

Find and patch the vulnerabilities in 131 purposely built insecure programs. In 24 hours. Without human intervention. That was the challenge we entered our Cyber Reasoning System (CRS) into. Despite some issues with patching performance, we are very proud of the results; our system identified vulnerabilities in 65 of those programs and rewrote 94 of them to eliminate the bugs. In the coming year we’ll be focusing on adapting our CRS to find and patch vulnerabilities in real software automatically.

Advocated Against Reactionary Regulation

As worrisome as online attacks are today, we find hasty government regulation just as unsettling. Some proposed expansions to the Wassenaar Arrangement would hamper the U.S. cybersecurity industry. That’s why we immediately endorsed the Coalition for Responsible Cybersecurity’s mission to ensure that U.S. export control regulations do not negatively impact U.S. cybersecurity effectiveness. See our comments to the Bureau of Industry and Security.

Contributed To Cyber Security Awareness Week (CSAW)

CSAW holds a special place in our hearts. Many of our team, from the founders to our newest hires, honed their skills on past years’ challenges. This year, we contributed five CTF challenges for the qualifying round: wyvern, bricks of gold, sharpturn, punchout, and “Math aside, we’re all black hats now.” (For teams willing to post helpful writeups, we passed out some stylish Trail of Bits attire.) Finally, we helped to shape the policy competition, which challenged participants to explore the possibility of a national bug bounty.

Added 64-bit Support To mcsema

Trail of Bits’ mcsema is an open-source framework for translating x86 and now x86-64 binaries into LLVM bitcode. It enables existing LLVM-based program analysis tools to operate on binary-only software. When we open sourced mcsema, we were hoping the community would respond with fixes, high quality contributions, and bug reports. Our hopes came to fruition when we received an open source contribution to support translation of x86-64 binaries. Many modern applications are compiled for 64-bit architectures like x86-64, and now mcsema can start translating them. We hope to see many more contributions in the new year.

Launched Bi-Monthly Meetup, Empire Hacking

We created Empire Hacking to serve as a space where the security research community could come together to freely share ideas and discuss the latest developments in security research. Empire Hacking happens bi-monthly in NYC and features talks on current topics in computer security. We are always looking for speakers (a great way to get feedback on your talk and distill your thoughts). Everyone, even journalists, is welcome. Empire Hacking is a free event. If you’d like to attend, please apply on our meetup.com page.

Published First-Ever Guide For Securing Google Apps

More than five million companies rely on Google Apps to run their critical business functions, like email, document storage, calendaring, and chat. In the wake of the OPM incident, we shared our top recommendations for small businesses that want to avoid the worst security problems while expending minimal effort. These are the essential practices that every small business should follow if they use Google Apps.

Trained Ruby Developers

Vast, lucrative swathes of the Internet were exposed to attackers when vulnerabilities were discovered in features and common idioms in Ruby. While nearly all large, tested and trusted open-source Ruby projects contained some of these vulnerabilities, few developers were aware of the risks. So, we published our RubySec Field Guide.

Hosted An Awesome Intern Who Made The Internet Safer

After she impressed us in the CTF challenges at CSAW 2014, we offered Loren a summer internship. As a self-starter and a quick study, she uncovered and reported vulnerabilities using american fuzzy lop and Microsoft MiniFuzz, found bugs in an NYC tech startup’s software, and presented her findings in a meeting with the company. We’re glad to have her back for her senior year of high school. She’ll be an asset to any college that’s lucky enough to have her.

Dragged The CTF Community Closer To Windows Expertise

Despite Windows being such an important part of our industry, American CTFs don’t release Windows-based challenges. They all come from Russia. This needs to change. The next crop of security researchers needs more Windows-based challenges and competitions. That’s why we released AppJailLauncher, a framework for making exploitable Windows challenges, keeping everything secure from griefers, and isolating a Windows TCP service from the rest of the operating system.

Lit Up The Flare-On Challenges

From simple password crack-mes to kernel drivers to steganography in images, FireEye’s second annual Flare-On Challenge had something for everyone (that is, if you were a reverse engineer, malware analyst, or security professional). Their eleven challenges encompassed an array of anti-reversing techniques and formats. We wrote up the four challenges that we took on (six, seven, nine, and eleven), as well as the more useful tools and techniques that might help in future challenges.

Open Sourced Our Self-Hosted Video Chat

‘Tuber’ is everything your team needs for secure video chat. It touts all the standard features you expect from Google Hangouts (like buttons to selectively mute audio and turn off video), and it’s engineered to work flawlessly on a corporate LAN with low latency and CPU usage. If you need video conferencing that doesn’t rely on any third-party services, you should check out Tuber.

Financially Supported Let’s Encrypt

We sponsored Let’s Encrypt, the free, automated, and open Certificate Authority (CA) that went into public beta on December 3. With so much room for improvement in the CA space, Let’s Encrypt offers a refreshing, promising vision of encrypting the web. We believe this will significantly improve HTTPS adoption, ensuring everyone benefits from a more secure Internet. That’s precisely why we’re supporting this initiative with a large (for us) donation and we hope you’ll join us in sponsoring Let’s Encrypt.

Sponsored Six Academic Events

We are proud of our roots in academia and research, and we believe it’s important to promote cybersecurity education for all students. This year, we sponsored and contributed to these events that sought to motivate and educate students of every academic level:

Looking Ahead

We have many exciting things planned for 2016. More of our automated vulnerability discovery and remediation technology is going to be open sourced. Ryan Stortz will be speaking at INFILTRATE 2016 on Swift reverse engineering, and his talk will be complemented with a blog post and whitepaper. We will also be releasing a new specialized fuzzer that we have used on several engagements. To continue community outreach, we will host an LLVM hackathon to create new program analysis tools and contribute changes back to the LLVM project. And last but not least, expect a makeover of the Trail of Bits website.

Let’s Encrypt the Internet

We’re excited to announce our financial support for Let’s Encrypt, the open, automated and free SSL Certificate Authority (CA) that went into public beta on December 3. With so much room for improvement in the CA space, Let’s Encrypt offers a refreshing, promising vision of encrypting the web.

Let’s Encrypt is an open, automated, and free SSL Certificate Authority

Expensive SSL certificates are holding back Internet security by making it difficult to enable HTTPS by default on all sites. The Federal CIO Council underscores the importance of widespread HTTPS deployment:

By always using HTTPS, web services don’t have to make a subjective judgment call about what [data is] sensitive. This leaves less room for error, and makes deployment simpler and more consistent. These changed expectations improve the security of HTTPS on every website. In other words, protecting less sensitive sites strengthens the protections of more sensitive sites.

We believe Let’s Encrypt will significantly improve HTTPS adoption, ensuring everyone benefits from a more secure Internet. That’s precisely why we’re supporting this initiative with a large (for us) donation and we hope you’ll join us in sponsoring Let’s Encrypt.

Let’s Encrypt should make the usual headaches of generating, installing, and updating SSL certificates a thing of the past. During the beta period, you can get an SSL certificate with a few simple steps; we expect major web hosting providers to soon offer seamless Let’s Encrypt integration. In addition to solving the problem of HTTPS adoption, Let’s Encrypt plans to renew all certificates more frequently than traditional CAs. This practice will flush out inappropriate or expired certificates sooner, which will help minimize the window of opportunity for mistakes or security issues.

Traditional Certificate Authorities will still have their place, but Let’s Encrypt will allow them to focus on more complex customer needs and provide higher assurances of identity and trust where needed. If you are frustrated with your current CA, we’ve had a good experience with DigiCert and recommend them as one of the better CAs in the industry.

Join us in supporting Let’s Encrypt today!

Self-Hosted Video Chat with Tuber

Today, we’re releasing the source code to our self-hosted video chat platform, Tuber Time Communications (or just “Tuber”). We’ve been using Tuber for private video calls with up to 15 members of our team over the last year or two. We want you to use it, protect your privacy, and help us make it better.

Tuber is everything your team needs for secure video chat. It touts all the standard features you expect from Google Hangouts (like buttons to selectively mute audio and turn off video), and it’s engineered to work flawlessly on a corporate LAN with low latency and CPU usage. If you need video conferencing that doesn’t rely on any third-party services, you should check out Tuber.

Built on WebRTC

Tuber takes advantage of the Web Real-Time Communications (WebRTC) protocol that’s becoming standard on modern browsers. Its client and server are written in JavaScript. That’s it. There’s no additional client software or plugins, and you don’t need to create an account to use it.

If you want to try out Tuber, you can set it up in one click with a Heroku Button. Otherwise, installation is simple and you’ll find instructions on our Github repo.

Tuber’s loveable mascot, Karl the Kartoffel

Why we developed Tuber

With so many third-party options for video chat out there, why would we go to the trouble of developing our own? For the reasons you’d expect from a security-conscious company: those third-party services require user accounts, are hosted on their servers, and don’t run well inside a corporate LAN. In the process, many of them spike your CPU to 100%. And forget proprietary solutions; they’re just as likely to have bugs and vulnerabilities, and cost a whole lot more.

As a company, we’re adamant about protecting our data. We encourage everyone to use end-to-end cryptography, S/MIME, their own decentralized services, and to manage their own encryption keys when forced to use the cloud. Until Tuber, we couldn’t recommend a video chat service. So we built it.

We’re big supporters of the movement to re-decentralize the web. The over-reliance on centralized web services like video chat is a substantial part of why privacy has become such a concern today. We prefer not to depend on anyone else for our data’s security. Like the teams that built Let’s Chat, Mattermost and Zulip, we built Tuber to provide a choice.

We’ve been dogfooding Tuber for the last year. Now, we want you to try it out, use it to protect your privacy, and help us make it better. Visit our Github repo to get self-hosted video chat now.

Acknowledgements

Thanks to: Andy Ying, who led development; the whole team at Trail of Bits for their contributions; Eric Weinstein, for bringing the code up to best practices; and Dustin Webber for his early guidance.

Why we give so much to CSAW

In just a couple of weeks, tens of thousands of students and professionals from all over the world will tune in to cheer on their favorite teams in six competitions. If you’ve been following our blog for some time, you’ll know just what we’re referring to: Cyber Security Awareness Week (CSAW), the nation’s largest student-run cyber security event. Regardless of how busy we get, we always make time to contribute to the event’s success.

CSAW holds a special place in our hearts.

We are proud of our roots in academia and research, and we believe it’s important to promote cyber security education for all students. We’ve been involved in CSAW since its inception. Dan and Yan competed as students, and went on to play a central role in the early years. Since then, our employees have contributed to events, particularly CTF challenges, our favorite flavor of CSAW. (Special kudos to Ryan and Sophia for all the time and effort they’ve contributed.) In fact, several of our staff competed as students before joining our team. Here’s looking at you, Sophia and Sam. Finally, we feel fortunate to have met our most recent intern, Loren, through the affiliated CSAW Summer Program for Women.

Part of what makes the CTF so great is that it incorporates diverse contributions by an array of collaborators. The resulting depth of expertise is hard to match.

This year, we contributed five CTF challenges for the qualifying round

wyvern

Participants start with an obfuscated Linux binary that asks for input when run (aka a crackme). Heavy obfuscation, using varying degrees of false predicate insertion, code diffusion, and basic block splitting (all possible through LLVM), would make this a leviathan of a static-reversing challenge. Instead, participants had to pursue a dynamic approach and use program analysis tools to brute force the flag. In the process, they learn how to leak which path the program takes by monitoring changes in instruction counts, and how to use tools such as PIN, Angr, or AFL.

bricks of gold

This challenge began with a note of international mystery: “We’ve captured an encrypted file being smuggled into the country. All we know is that they rolled their own custom CBC mode algorithm – it’s probably terrible.” Participants must successfully decrypt the file’s custom XOR-CBC encryption. That led them to seek the algorithm, the key, and the IV. Doing so required knowledge of file headers, cryptography, and brute force. Participants also learn how to examine an encrypted file for low entropy, unencrypted strings, and CBC mode block patterns.
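
Once those pieces are recovered, undoing the scheme takes only a few lines of Python; a minimal sketch, assuming the “cipher” XORs each block with a fixed key and chains CBC-style (the real key, IV, and block size come out of the analysis above):

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def xor_cbc_decrypt(ciphertext, key, iv):
        block, prev, plain = len(key), iv, b""
        for i in range(0, len(ciphertext), block):
            c = ciphertext[i:i + block]
            plain += xor_bytes(xor_bytes(c, key), prev)  # P = C ^ K ^ prev
            prev = c                                     # CBC chaining
        return plain

A known-plaintext crib at block zero (say, a standard file header) hands over the key directly: key = C0 ^ header ^ IV.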

sharpturn

Participants receive an archive of a broken git repository. They need to fix the corruption and read the files. In fact, there are three corruptions: each is a single flipped bit, and all are contained in individual source code files. (This actually happened to Trail of Bits.) Once repaired, the source code files compile into a binary with the answer embedded inside. Participants learn how Git blobs contain versions of repository files that have been prepended with a header and zlib compressed. Git’s versioning provides enough information to rebuild the broken commits, but participants must dig into the lower-level details of how Git is implemented to write a recovery program.
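
A recovery program for one corrupt loose object can be remarkably small; a sketch that brute forces single-bit flips until the zlib stream inflates to content matching the object’s SHA-1 (the hash Git already stores for us):

    import hashlib, zlib

    def repair(blob, want_sha1):
        data = bytearray(blob)
        for i in range(len(data) * 8):
            data[i // 8] ^= 1 << (i % 8)            # flip one bit
            try:
                obj = zlib.decompress(bytes(data))  # "type size\0" + contents
                if hashlib.sha1(obj).hexdigest() == want_sha1:
                    return bytes(data)              # repaired object
            except zlib.error:
                pass
            data[i // 8] ^= 1 << (i % 8)            # undo, try the next bit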

punchout

The story opens with three binary blobs taken from IBM System/360 punch cards, and their encrypted data. These cards were encrypted with technology and techniques from 1965, requiring participants to research how security worked in that era. They also encounter ciphers like the KW-26, which generated long streams of bits and XOR’d them against the plaintext, and IBM’s use of EBCDIC (not ASCII) for encoding. The same stream of bits was used to encrypt each blob, and this cryptographic key reuse has a known attack: participants attack the cipher with “cribs” in a process known as “crib dragging.”
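
The key-reuse attack works because XORing two ciphertexts cancels the shared keystream, leaving p1 ^ p2; dragging a guessed plaintext across that exposes the other message. A minimal sketch (the real challenge adds EBCDIC decoding on top):

    def crib_drag(c1, c2, crib):
        x = bytes(a ^ b for a, b in zip(c1, c2))   # keystream cancels: p1 ^ p2
        for i in range(len(x) - len(crib) + 1):
            guess = bytes(k ^ c for k, c in zip(crib, x[i:i + len(crib)]))
            if all(32 <= b < 127 for b in guess):  # printable: a candidate hit
                print(i, guess)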

“math aside, we’re all blackhats now”

Participants must identify the security industry consultant working for the TV show ‘Silicon Valley.’ During its first two seasons, discerning viewers noticed all kinds of props, name dropping, and references to the CTF community, with notable accuracy in its security-related plot elements. There is no way the show’s producers could have learned all these references on their own. Someone had to be feeding them inside information. Who could it be?

1,367 teams scored at least one point, which already makes the event a resounding success in our books. We’re looking forward to watching the CTF finalists duke it out in New York. If you missed the deadlines, you can always find our old CTF challenges on Github.

T-shirt bounty for writeups

For a few bribable teams willing to share their thought processes, we’re passing out these snazzy t-shirts for posting helpful writeups. We think it’s pretty cool to send these shirts all over the world, including England, Canada, Australia, and Singapore!

Thanks and kudos to the teams that posted writeups for:

  • bricks of gold
  • wyvern
  • sharpturn

Shaped the Policy competition

Wassenaar shone a spotlight on an array of issues we’ve been tackling for years now. We’re big supporters of the Coalition for Responsible Cybersecurity’s mission to ensure that U.S. export control regulations don’t negatively impact U.S. cybersecurity effectiveness.

So, it seemed only natural that we’d assist CSAW with its policy competition. We love the idea of the US Government hosting a bug bounty. We, as a country, could buy a lot of bugs for the billions wasted on junk security. Our topic challenged students to explore this idea and present a workable solution. We were delighted to see an exploration of this topic in the Army’s Cyber Defense Review recently.

Submissions were judged by a panel of experts in the field representing all sides of this contentious question. The top five teams will present their proposals in-person at CSAW. The top three teams will receive cash prizes and some serious attention from industry experts.

No more THREADS

After three years of running THREADS, we’ve decided to refocus our contribution to CSAW on the competitions. We hope you’ll join us in helping motivate and educate students of every academic level. (If you’re out of your school years and in New York, you might be interested in coming to our Empire Hacking meetup.)

May the best teams win.

Summer @ Trail of Bits

This summer I’ve had the incredible opportunity to work with Trail of Bits as a high school intern. In return, I am obligated to write a blog post about this internship. So without further ado, here it is.

Starting with Fuzzing

The summer kicked off with fuzzing, a technique I had heard of but had never tried. The general concept is to throw input at a program until it crashes, then analyze the crash to find a vulnerability. Because of time constraints, it didn’t make sense to write my own fuzzer, so I began looking for pre-existing fuzzers online. The first tool that I found was CERT’s Failure Observation Engine (FOE), which seemed very promising. FOE has many options that allow for precise fine-tuning of the fuzzer, so it can be tweaked specifically for the target. However, my experience with FOE was fickle: with certain targets, the tool would run once and stop, instead of running continuously (as a fuzzer should). Just wanting to get started, I decided to move on to other tools.

I settled on american fuzzy lop (afl) for Linux and Microsoft MiniFuzz for Windows. Each had its pros and cons. Afl works best with source code, which limits its scope to open-source software (there is experimental support for closed-source binaries, but it is considerably slower). Compiling from source with afl allows the fuzzer to measure code coverage and provide helpful feedback in its interface. MiniFuzz is the opposite: it runs on closed-source Windows software and provides very little feedback while it runs. However, its crash data is very helpful, as it gives the values of all registers at the time of the crash, something the other fuzzers did not provide. MiniFuzz was very click-and-run compared to afl’s more involved compilation setup.

Examining a Crash

Once the fuzzers were set up and running on targets (VideoLAN’s VLC, Wireshark, and ImageMagick, just to name a few), it was time to start analyzing the crashes. Afl reported several crashes in VLC. While verifying that these crashes were reproducible, I noticed that several were segfaults while trying to free the address 0x7d. This struck me as odd because the address was so small, so on a hunch I opened up the crashing input in a hex editor and searched for ‘7d’. Sure enough, deep in the file was a match: 0x0000007d. I changed this to something easily recognizable, 0x41414141, and ran the file through again. This time the segfault was on, you guessed it, 0x41414141! Encouraged by the knowledge that I could control an address in the program from a file, I set out to find the bug. This involved a long process of becoming undesirably acquainted with both gdb and the VLC source code. The bug allows for the freeing of two arbitrary, user-controlled pointers.
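
Verifying that hunch is a two-line experiment; a sketch of the byte swap (the file name is a stand-in, and since MP4 box fields are big-endian the match appears as 00 00 00 7d):

    data = open("crash_input.mp4", "rb").read()
    patched = data.replace(b"\x00\x00\x00\x7d", b"\x41\x41\x41\x41", 1)
    open("crash_patched.mp4", "wb").write(patched)

Conveniently, 0x41414141 reads the same in either byte order, so the marker shows up in the crash regardless of endianness.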

The Bug in Detail

VLC reads in different parts of a file as boxes, which it categorizes in a tagged union. The bug is the result of a type confusion: when the size of the stsd box in the file is increased, VLC considers the following box, an stts box, to be the stsd box’s child. VLC reads boxes from the file by indexing into a function table based on the type of the box and the type of its parent. With the wrong parent, it finds no match and instead uses a default read, which reads the box in as a vide type box. Later, when freeing the box, VLC looks up the function by the box’s own type alone, so it triggers the stts free function. VLC thus tries to free an stts box that was read in as a generic vide box, and frees two addresses straight from the stts box.

CVE-2015-5949

Controlling two freed addresses is plausibly exploitable, so it was time to report the bug. I went through oCERT, who were very helpful in communicating the bug to the VLC developers and getting a CVE assigned (CVE-2015-5949). After some back and forth the issue was settled, and it was time to move on to something new.

Switching Gears to the Web

With half a summer done and another half to learn something new, I began to explore web security. I had slightly more of a background in this from some CTFs and from NYU Hack Night, but I wanted to get a more in-depth and practical understanding. Unlike fuzzing, where it was easy to hit the ground running, web security required a bit more knowledge beforehand. I spent a week trying to learn as much as possible from The Web Application Hacker’s Handbook and the corresponding MDSec labs. Armed with a solid foundation, I put this training to good use.

Bounty Hunting

HackerOne has a directory of companies that have bug bounty programs, and this seemed the best place to start. I sorted by date joined and picked the newest companies – they probably had not been looked at much yet. Using BurpSuite, an indispensable tool, I poked through these websites looking for anything amiss. Looking through sites like ok.ru, marktplaats.nl, and united.com, I searched for vulnerable functions and security issues, and submitted a few reports. I’ve had some success, but they are still going through disclosure.

Security Assessment

To conclude the internship, I performed a security assessment of a tech startup in NYC, applying the skills I’ve acquired. I found bugs in application logic, access controls, and session management, the most severe of which was a logic flaw that posed significant financial risk to the company. I then had the opportunity to present these bugs in a meeting with the company. The report was well-received and the company is now implementing fixes.

Signing Off

This experience at Trail of Bits has been fantastic. I’ve gotten a solid foundation in application and web security, and it’s been a great place to work. I’m taking a week off to look at colleges, but I’ll be back working part time during my senior year.

Flare-On Reversing Challenges 2015

This summer, FireEye’s FLARE team hosted its second annual Flare-On Challenge, targeting reverse engineers, malware analysts, and security professionals. In total there were eleven challenges, each using different anti-reversing techniques and each in a different format. For example, challenges ranged from simple password crack-mes to kernel drivers to stego in images.

This blog post will highlight four of the eleven challenges (specifically 6, 7, 9, and 11) that we found most interesting, as well as some of the more useful tools and materials that would help with future challenges like these.

Challenge Six

  • Summary: Challenge Six was an obfuscated Android App crack-me which took and verified your input
  • Techniques Used: Remote Android debugging, IDAPython

The novelty of this level was that it wasn’t a Windows binary (the majority of the challenges targeted the Windows platform; clearly looking for some Windows reversers ;] ) and that it required knowledge of ARM reversing.

At the heart of this level was the ARM shared object library that contained the algorithm for checking the key. Launching the app on either a spare malware-designated Android phone or an emulator, we’re greeted by the app’s input prompt.

Taking a stab at gambling, we try entering “password”. No luck.

Opening it in IDA (if you did this first without running it… you’re in good company) we see that the important part of the library is the compare.

Tracing this compare backwards we find the function which generates the expected input value. All we need to do is statically reverse this. The main part of this decryption function is the factorization of the encrypted password stored in the binary.

The logic from this function can be ported into Python along with the encrypted string. Using IDAPython to extract the necessary data from the binary makes this process a lot easier. For those who have never used IDAPython, the gist of the script follows.

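A minimal sketch of the data-extraction half (the address and length are placeholders, not the real offsets); the checking logic itself is then re-implemented by hand:

    # run inside IDA: carve the encrypted blob out of the binary
    import idc

    ENC_ADDR = 0x1234    # placeholder: address of the encrypted password data
    ENC_LEN  = 0x20      # placeholder: its length
    enc = idc.get_bytes(ENC_ADDR, ENC_LEN)
    print(" ".join("%02x" % b for b in enc))
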
The above logic was exfiltrated from the obfuscated binary through static reversing. IDAPython helped with carving out the right data segments from the app.

IDAPython script to dump prime index map

IDAPython script to dump “rounds”

Running the final Python script to decrypt the string prints the intended password.

Should_have_g0ne_to_tashi_$tation@flare-on.com

Tangents

Aside from reversing statically, remote debugging can also be done with gdbserver, either attaching to the app running on a phone or to an emulated Android device.

A breakpoint can then be set at the compare and the decrypted flag read out of the debugger. To do this, extract the Android APK, set up an Android debugging environment, and break at the calls into the shared, obfuscated object.
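
On a rooted device or an emulator, that setup is roughly (port and pid are placeholders):

    adb push gdbserver /data/local/tmp
    adb shell /data/local/tmp/gdbserver :5039 --attach <pid>
    adb forward tcp:5039 tcp:5039
    gdb                        # then: target remote :5039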

There are a few good resources online that show how to set up a remote gdb environment on Android; some useful ones are listed at the bottom of this post.

Challenge Seven: YUSoMeta

  • Summary: Challenge 7 was an obfuscated .NET application that verified a user-supplied password.
  • Techniques Used: .NET deobfuscation, Windbg special breakpoints

Challenge 7, YUSoMeta, was a .NET Portable Executable format application. Like every good reverser, we load the .NET application into IDA Pro.

Glancing at the Functions window reveals quite a few peculiarly named methods. Many of the classes and class field names do not consist exclusively of ASCII characters (as exhibited by “normal” .NET applications). This suggests the presence of obfuscation.

Opening the application in a hex editor (our particular choice is HxD), we find an interesting string: “Powered by SmartAssembly 6.9.0.114”.

SmartAssembly is an obfuscator (much like Trail of Bits’ MAST) for .NET applications. Luckily, de4dot is a tool that deobfuscates SmartAssembly-protected applications. Once deobfuscated, tools such as .NET Reflector can decompile the Common Intermediate Language (CIL) assembly back into C#. Using this, we find a password verification function.

The challenge captures the user-supplied password and compares it to the expected password, which is generated by a series of somewhat complex operations. The easiest way to obtain the expected password is to use WinDbg.

First, we set up WinDbg by loading the SOS debugging extension, which lets us introspect managed programs (aka .NET applications).

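For a .NET 4.x target, that looks something like:

    .loadby sos clr

(on .NET 2.x/3.5, the extension lives in mscorwks: .loadby sos mscorwks)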

Second, we need to set up the symbols path to obtain debugging symbols.

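Pointing WinDbg at the Microsoft public symbol server is typically enough:

    .symfix
    .reload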

Afterwards, we set a breakpoint on the string equality comparison function, System.String.op_Equality in mscorlib.dll. Note: we run the !name2ee twice because !name2ee always fails on the first issuance.

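The pattern is something like the following, with !bpmd doing the heavy lifting of setting a breakpoint on a managed method:

    !name2ee mscorlib.dll System.String.op_Equality
    !name2ee mscorlib.dll System.String.op_Equality
    !bpmd mscorlib.dll System.String.op_Equality
    g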

Upon breaking, we examine the stack objects using !dumpstackobjects. The password used to extract the key should be on the .NET stack.

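Once the breakpoint hits, the dump is one command:

    !dumpstackobjects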

Challenge Nine: you_are_very_good_at_this

  • Summary: Challenge 9 was an obfuscated command line password checking application
  • Techniques Used: Intel PIN, Windbg, Python, IDA Pro

Challenge 9, you_are_very_good_at_this, was an x86 Portable Executable command line application that took an argument and verified it against the expected password – a basic crack-me.

Like all good reversers, we immediately open the application in IDA Pro, which reveals an enormous wall of obfuscated code – clearly dynamic analysis is the way to go.

To us, there are two clear ways of solving this challenge: the first uses a Pin tool; the second, WinDbg.

First Solution: Pin

We know that the crack-me is checking the command line input somehow, character by character, through mass amounts of operations. Luckily for us, we don’t really need to know more than that.

Using a simple instruction-counting Pin tool (inscount0.cpp from Pin’s tutorial works perfectly), we can count the instructions executed while checking our input and determine whether it failed on the 1st character or on the nth character. This allows us to brute force the password byte by byte.

More instructions are executed when the nth character is the first incorrect one than when the 1st is: exit() isn’t called until later in the execution of the program. We can use this knowledge to determine that the first n-1 characters are correct inputs.

Using Python, we script the pintool to give us the instruction count of the binary’s execution using every possible printable character for the first character of the password.

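A sketch of that loop; the Pin invocation and paths are illustrative (the tutorial tool writes its total to inscount.out):

    import string, subprocess

    def inscount(guess):
        # run the target under the instruction-counting pintool
        subprocess.run(["pin", "-t", "inscount0.so", "--", "./challenge", guess],
                       capture_output=True)
        return int(open("inscount.out").read().split()[-1])

    password = ""
    while True:
        counts = {c: inscount(password + c) for c in string.printable.strip()}
        outlier = max(counts, key=counts.get)
        if counts[outlier] == min(counts.values()):
            break                    # no outlier left: the password is complete
        password += outlier          # the outlier survived one more check
        print(password)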

Doing this, all inputs give us the same instruction count result except for the input containing the correct first character. This is because the correct character is the only one which passes the application’s validator. When this happens, the binary executes additional instructions that aren’t otherwise run.

Now that we know one character, we add a for loop to our script to check for an outlier, and do the same thing for every character of the password… successfully leaking the password!

Last year’s Flare-On Challenge 6 was also solvable in this exact way; thanks to @gaasedelen for his detailed writeup on it.

Second Solution: Windbg/Python/IDA Pro

To solve this challenge the old-fashioned way, we launch WinDbg and set a breakpoint on kernel32!ReadFile. We trace kernel32!ReadFile’s caller and manually deobfuscate the password-checking loop by cross-analyzing in IDA Pro.

The password checking loop uses a CMPXCHG instruction to compare the characters of the user supplied password and the expected password.

We determined the registers of interest are the AL and BL registers. Tracing the dataflow for the registers of interest reveals that the AL register encounters some transformations, as a function of CL and AH, but ultimately derives from the user supplied buffer. This implies that the BL register contains the transformed character of the expected password.

Fortunately, we are able to precisely breakpoint at an instruction in the password verification loop and extract the necessary register values (namely the BL, CL, and AH registers) to decode the actual password.

To decode the expected password, we take the printed BL, CL, and AH register values for each “character round” and implement a Python function to reverse the XOR/ROL transformation done on AL.

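A sketch of that function; the exact rotate/XOR order is inferred from the dataflow, so treat it as illustrative:

    def ror(byte, count):
        count &= 7
        return ((byte >> count) | (byte << (8 - count))) & 0xFF

    def ror_xor(bl, cl, ah):
        # BL holds the transformed expected character; undo the
        # rotate (by CL) and the XOR (with AH) applied to AL
        return chr(ror(bl, cl) ^ ah)

    # key = "".join(ror_xor(bl, cl, ah) for bl, cl, ah in rounds)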

We unearth the key by joining the output of ror_xor for each “character round”.

Challenge Eleven: CryptoGraph

  • Summary: Challenge Eleven was a JPEG image encrypted with the RC5 algorithm and an obfuscated key. It turned out to be a solid ‘reversing’ challenge.
  • Techniques Used: RC5 crypto algorithm, WinDbg, resource segment carving

The final challenge. Challenge Eleven, CryptoGraph.exe, was a command line binary that, when run with no arguments, creates a junk JPEG file on the system. Looking closer, we see that the binary does accept one command line argument. However, when any number is passed, the binary loops ‘forever’.

Opening the binary in IDA Pro, we assume that the flag will somehow appear in a properly created image. This means we start reversing by tracing up the calls from “WriteFile.”

A few functions up, we realize that resource #124 is being loaded, decrypted, and saved as this image file.

The decryption algorithm is easily identifiable as RC5 through Google: the first result is the Kaspersky report on the Equation Group and their use of RC5/6.

Now all we need is the RC5 decryption key. Unluckily for us, the key is 16 bytes long and cannot be easily brute forced. However, reversing further, we realize that the key is the result of two distinct RC5 decryption stages.

The output of the first decryption is indexed using a random index byte between 0x0 and 0xf, producing an 8-byte key.

This key is then used in another RC5 decryption, which reads the encrypted source (Resource_122) at an offset derived from the same random index byte. This second stage decrypts only 16 bytes: the 16-byte RC5 key needed for the encrypted JPEG, Resource_124.

Diagram showing the different decryption stages

Breaking in WinDbg, we realize that the decryption of Resource_121 is what causes the program to seemingly loop forever. In fact, the loops, which run from 0x0 to 0x20, take exponentially longer to execute with each iteration.

Given the RC5 key length and the algorithm used for indexing into the decrypted Resource_121 (which yields this RC5 key), we determine that only one section of the resource is necessary.

Indexing Algorithm

Decrypting only the relevant bits of Resource_121 reduces the execution time significantly. The indexing algorithm, which is not entirely deterministic, can index at most into the first 784 bytes of the decrypted resource.

Because each loop iteration decrypts 48 bytes (hardcoded as an argument passed to the decrypt function), we need to let the main decryption loop run past 0x10 iterations before breaking out of the function.

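The indexing algorithm reaches at most the first 784 bytes, and each iteration decrypts 48, so ceil(784 / 48) = 17 = 0x11 iterations cover the worst case; that is why the loop must run past iteration 0x10 before we break out.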

Using WinDbg, we break at the loops to stop after the 0x10th iteration. This means only part of Resource_121 will be decrypted, but thankfully, that’s the only part the 8-byte key derivation needs.

One last thing needs to be brute forced: a single byte value between 0x0 and 0xf that affects the indexing algorithm. This byte affects the generation of the previously discussed 8-byte key, as well as the index into Resource_122 from which the 16-byte key is decrypted.

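In outline, the harness does something like this; start_process, run_to, jump_to, run_to_exit, and is_valid_jpeg are hypothetical helpers standing in for the WinDbg scripting glue, and we assume the command line argument feeds the index byte:

    # brute force the magic index byte 0x0..0xf, short-circuiting
    # the slow Resource_121 decryption loop on every run
    for magic in range(0x10):
        start_process("CryptoGraph.exe", str(magic))
        for _ in range(0x10):
            run_to(LOOP_CHECK_ADDR)      # let 0x10 loop iterations complete
        jump_to(LOOP_EXIT_ADDR)          # then break out of the slow loop
        run_to_exit()
        if is_valid_jpeg("output.jpg"):  # did this index byte decrypt the image?
            print("magic byte:", hex(magic))
            break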

Scripting this in WinDbg (full script found here), we let the binary run 0xf times, each time stopping the loops after 0x10 iterations.

On the 0x9 iteration (the magic indexing byte), the correctly decrypted image is saved to a file, and the flag can be read out :].

FIN

Thanks to the FLARE team at FireEye for putting these challenges together and successfully forcing me to crack open my Windows VM and learn some new reversing tools. Hope next year’s are just as fun, obscure, and maybe a little harder.

References, Guides, & Tools

Things that we found useful for the challenges.

Hardware Side Channels in the Cloud

At REcon 2015, I demonstrated a new hardware side channel that targets co-located virtual machines in the cloud. This attack exploits the CPU’s pipeline, as opposed to the cache tiers often used in side channel attacks. When designing or looking for hardware-based side channels – specifically in the cloud – I analyzed a few universal properties that define the ‘right’ kind of vulnerable system, as well as unique ones tailored to the hardware medium.

Slides and full research paper found here.

The relevance of side channel attacks will only increase, especially attacks that target the vulnerabilities inherent to systems that share hardware resources, such as cloud platforms.

Figure 1: virtualization of physical resources

BUT WHAT IS A SIDE CHANNEL ATTACK???

Any meaningful information that you can leak from the environment running the target application or, in this case, the victim virtual machine counts as a side channel. However, some information is better than others. In this case a process (the attacker) must be able to repeatedly record an environment ‘artifact’ from inside one virtual machine.

In the cloud, these environment artifacts are the shared physical resources used by the virtual machines. The hypervisor dynamically partitions each resource and this is then seen by a single virtual machine as its private resource. The side channel model (Figure 2) illustrates this.

Knowing this, the attacker can affect that resource partition in a recordable way, such as by flushing a line in the cache tier, waiting until the victim process uses it for an operation, then requesting that address again – recording what values are now there.

Figure 2: side channel model

ATTACK EXAMPLES

Great! So we can record things from our victim’s environment – but now what? Depending on what the victim’s process is doing, we can employ several different types of attacks.

1. crypto key theft

Crypto keys are great, private crypto keys are even better. Using this hardware side channel, it’s possible to leak the bytes of the private key used by a co-located process. In one scenario, two virtual machines are allocated the same space on the L3 cache at different times. The attacker flushes a certain cache address, waits for the victim to use that address, then queries it again – recording the new values that are there [1].

2. process monitoring ( what applications is the victim running? )

This is possible when you record enough of the target’s behavior, i.e., CPU or pipeline usage or values stored in memory. A mapping from the recordings to a specific running process can be constructed with a varying degree of certainty. Warning: this does rely on at least a rudimentary knowledge of machine learning.

3. environment keying ( great for proving co-location! )

Using the environment recordings taken off of a specific hardware resource, you can also uniquely identify one server from another in the cloud. This is useful to prove that two virtual machines you control are co-resident on the same physical server. Alternatively, if you know the behavior signature of a server your target is on, you can repeatedly create virtual machines, recording the behavior on each system until you find a match [2].

4. broadcast signal ( receive messages without the internet :0 )

If a colluding process is purposefully generating behavior on a pre-arranged hardware resource, such as filling a cache line with 0’s and 1’s, the attacker (your process) can record this behavior in the same way it would record a victim’s behavior. You can then translate the recorded values into pre-agreed messages. Recording from different hardware mediums results in channels with different bandwidths [3].

The Cache is Easy, the Pipeline is Harder

Now, all of the above examples used the cache to record the environment shared by victim and attacker processes. The cache is the most widely used resource for constructing side channels, in both literature and practice, and it is the easiest to record artifacts from. Basically everyone loves cache.

The cache isn’t the only shared resource, though: co-located virtual machines also share the CPU execution pipeline. In order to use the CPU pipeline, we must be able to record a value from it. However, there is no easy way for a process to query the state of the pipeline over time – it is like a virtual black box. The only things a process can know are the instruction order it submits for execution on the pipeline and the result the pipeline returns.

out-of-order execution

( the pipeline’s artifact )

We can exploit this pipeline optimization as a means to record the state of the pipeline. A known input instruction order will result in one of two different return values: the expected result(s), or the result produced if the pipeline executes the instructions out-of-order.

Figure 3: foreign processes can share the same pipeline

strong memory ordering

Our target, cloud processors, can be assumed to be x86/64 architecture, implying a (usually) strongly-ordered memory model [4]. This is important because the pipeline will optimize the execution of instructions but attempt to maintain the right order of stores to memory and loads from memory

…HOWEVER, the stores and loads from different threads may be reordered by out-of-order-execution. Now this reordering is observable if we’re clever.

recording instruction reorder ( or how to be clever )

In order for the attacker to record the “reordering” artifact from the pipeline, we must record two things for each of our two threads:

  • input instruction order
  • return value

Additionally, the instructions in each thread must contain a STORE to memory and a LOAD from memory. The LOAD must reference the location stored to by the opposite thread. This setup allows for the four cases illustrated below; the last is the artifact we record – doing so several thousand times gives us averages over time.

Figure 4: the attacker can record when its instructions are reordered

sending a message

To make our attacks more interesting, we want to be able to force the number of recorded out-of-order executions. This ability is useful for other attacks, such as constructing covert communication channels.

In order to do this, we need to alter how the pipeline’s optimization works, increasing or decreasing the probability that it will reorder our two threads. The easiest approach is to enforce a strong memory order, guaranteeing that the attacker will record fewer out-of-order executions.

memory barriers

In the x86 instruction set, there are specific barrier instructions that stop the processor from reordering the four possible combinations of STOREs and LOADs. What we’re interested in is forcing a strong order when the processor encounters a STORE followed by a LOAD.

The instruction mfence does exactly this.

By having the colluding process inject these memory barriers into the pipeline, the attacker’s instructions will not be reordered, forcing a noticeable decrease in the recorded averages. Doing this in distinct time frames allows us to send a binary message.
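
As a sketch of why this works, consider the classic store-load litmus test, with X and Y both starting at 0 (registers r1/r2 are illustrative):

    Thread A:  mov [X], 1      Thread B:  mov [Y], 1
               mfence                     mfence
               mov r1, [Y]                mov r2, [X]

Without the fences, x86 may hoist each LOAD above its own thread’s STORE, so both threads can read 0 – exactly the out-of-order artifact we count. With mfence in place, (r1, r2) = (0, 0) becomes impossible, and the recorded average visibly drops.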

Figure 5: mfence ensures the strong memory order on pipeline

FIN

The takeaway is that even with virtualization separating your virtual machine from the hundreds of other alien virtual machines, the pipeline can’t distinguish your process’s instructions from all the others, and we can use that to our advantage. :0

If you would like to learn more about this side channel technique, please read the full paper here.

  1. https://eprint.iacr.org/2013/448.pdf
  2. http://www.ieee-security.org/TC/SP2011/PAPERS/2011/paper020.pdf
  3. https://www.cs.unc.edu/~reiter/papers/2014/CCS1.pdf
  4. http://preshing.com/20120930/weak-vs-strong-memory-models/