How We Fared in the Cyber Grand Challenge

The Cyber Grand Challenge qualifying event was held on June 3rd, at exactly noon Eastern time. At that instant, our Cyber Reasoning System (CRS) was given 131 purposely built insecure programs. During the following 24-hour period, our CRS identified vulnerabilities in 65 of those programs and rewrote 94 of them to eliminate the bugs built into their code. This proves, without a doubt, that it is possible to automate the actions of a talented software auditor.

Despite the success of our CRS at finding and patching vulnerabilities, we did not qualify for the final event, to be held next year. A fatal flaw dropped our overall ranking to 9th, below the 7th-place cutoff for qualification. In this blog post we’ll discuss how our CRS works, how it performed against competitor systems, what doomed its score, and what we are going to do next.

Cyber Grand Challenge Background

The goal of the Cyber Grand Challenge (CGC) is to combine the speed and scale of automation with the reasoning capabilities of human experts. Multiple teams create Cyber Reasoning Systems (CRSs) that autonomously reason about arbitrary networked programs, prove the existence of flaws in those programs, and automatically formulate effective defenses against those flaws. How well these systems work is evaluated through head-to-head tournament-style competition.

The competition has two main events: the qualifying event and the final event. The qualifying event was held on June 3, 2015. The final event is set to take place during August 2016. Only the top 7 competitors from the qualifying event proceed to the final event.

During the qualifying event, each competitor was given the same 131 challenges, or purposely built vulnerable programs, each of which contained at least one intentional vulnerability. For 24 hours, the competing CRSes faced off against each other and were scored according to four criteria. The full details are in the CGC Rules, but here’s a quick summary:

  • The CRS had to work without human intervention. Any teams found to use human assistance were disqualified.
  • The CRS had to patch bugs in challenges. Points were gained for every bug successfully patched. Challenges with no patched bugs received zero points.
  • The CRS could prove bugs exist in challenges. The points from patched challenges were doubled if the CRS could generate an input that crashed the challenge.
  • The patched challenges had to function and perform almost as well as the originals. Points were lost based on performance and functionality loss in the patched challenges.

A spreadsheet with all the qualifying event scores and other data used to make the graphs in this post is available from DARPA (Trail of Bits is the ninth place team). With the scoring in mind, let’s review the Trail of Bits CRS architecture and the design decisions we made.

Preparation

We’re a small company with a distributed workforce, so we couldn’t physically host a lot of servers. Naturally, we turned to cloud computing for processing; specifically, Amazon EC2. Those who saw our tweets know we used a lot of EC2 time. Most of that usage was purely out of caution.

We didn’t know how many challenges would be in the qualifying event — just that it would be “more than 100.” We prepared for a thousand, with each accompanied by multi-gigabyte network traffic captures. We were also terrified of an EC2 region-wide failure, so we provisioned three different CRS instances, one in each US-based EC2 region, affectionately named Biggie (us-east-1), Tupac (us-west-2), and Dre (us-west-1).

It turns out that there were only 131 challenges and no gigantic network captures in the qualifying event. During the qualifying event, all EC2 regions worked normally. We could have comfortably done the qualifying event with 17 c4.8xlarge EC2 instances, but instead we used 297. Out of an abundance of caution, we over-provisioned by a factor of ~17x.

Bug Finding

The Trail of Bits CRS was ranked second by the number of verified bugs found (Figure 1). This result is impressive considering that we started from scratch, while several other teams entered the CGC with existing bug-finding systems.

Figure 1: Teams in the qualifying event ranked by number of bugs found. Orange bars signify finalists.

Our CRS used a multi-pronged strategy to find bugs (Figure 2). First, there was fuzzing. Our fuzzer is implemented with a custom dynamic binary translator (DBT) capable of running several 32-bit challenges in a single 64-bit address space. This is ideal for challenges that feature multiple binaries communicating with one another. The fuzzer’s instrumentation and mutation are separated, allowing for pluggable mutation strategies. The DBT framework can also snapshot binaries at any point during execution. This greatly improves fuzzing speed, since it’s possible to avoid replaying previous inputs when exploring new input space.
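
To illustrate the snapshot idea, here is a minimal sketch using fork() on POSIX rather than our DBT; run_target is a hypothetical harness that resumes the challenge from the snapshot point with one candidate input:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical harness: resumes the challenge from the snapshot point
   and feeds it one candidate input. */
extern void run_target(const uint8_t *input, size_t len);

void fuzz_from_snapshot(void) {
    uint8_t input[128];
    for (int i = 0; i < 10000; i++) {
        pid_t child = fork();   /* "snapshot": the child inherits all state */
        if (child == 0) {
            /* mutate: random bytes stand in for a real mutation strategy */
            size_t len = (size_t)(rand() % sizeof(input));
            for (size_t j = 0; j < len; j++)
                input[j] = (uint8_t)rand();
            run_target(input, len);  /* no need to replay earlier inputs */
            _exit(0);
        }
        int status = 0;
        waitpid(child, &status, 0);
        if (WIFSIGNALED(status))
            printf("input %d crashed the target (signal %d)\n", i, WTERMSIG(status));
    }
}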

Figure 2: Our bug finding architecture. It is a feedback-based architecture that explores the state space of a program using fuzzing and symbolic execution.

In addition to fuzzing, we had not one but two symbolic execution engines. The first operated on the original, unmodified binaries; the second operated on the LLVM bitcode produced by mcsema. Each symbolic execution engine had its own strengths, and both contributed to bug finding.

The fuzzer and symbolic execution engines operate in a feedback loop mediated by a system we call MinSet. The MinSet uses branch coverage to maintain a minimum set of maximal coverage inputs. The inputs come from any source capable of generating them: PCAPs, fuzzing, symbolic execution, etc. Every tool gets original inputs from MinSet, and feeds any newly generated inputs into MinSet. This feedback loop lets us explore the possible input space with both fuzzers and symbolic execution in parallel. In practice this is very effective. We log the provenance of our crashes, and most of them look something like:

Network Capture ⇒ Fuzzer ⇒ SymEx1 ⇒ Fuzzer ⇒ Crash
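
The core MinSet policy is simple enough to sketch in C. Here, get_coverage is a hypothetical harness that runs a challenge on an input and fills a branch-coverage bitmap; an input is kept only if it covers something no kept input has covered:

#include <stdbool.h>
#include <stddef.h>

#define MAP_SIZE 65536
static unsigned char global_cov[MAP_SIZE];  /* branches seen by any kept input */

/* Hypothetical harness: runs the challenge on an input and fills a
   branch-coverage bitmap. */
extern void get_coverage(const char *input, size_t len,
                         unsigned char cov[MAP_SIZE]);

/* Returns true if the input adds new coverage and should be kept. */
bool minset_offer(const char *input, size_t len) {
    unsigned char cov[MAP_SIZE] = {0};
    bool keep = false;
    get_coverage(input, len, cov);
    for (size_t i = 0; i < MAP_SIZE; i++) {
        if (cov[i] & ~global_cov[i]) {   /* a branch no kept input has hit */
            global_cov[i] |= cov[i];
            keep = true;
        }
    }
    return keep;  /* kept inputs are handed back to the fuzzers and symex */
}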

Some bugs can only be triggered when the input replays a previous nonce, which would be different on every execution of the challenge. Our bug finding system can produce inputs that contain variables based on program outputs, enabling our CRS to handle such cases.

Additionally, our symbolic executors can identify which inputs affect program state at the point of a crash. This is a key requirement for success in the final event, as it enables the CRS to create a more controlled crash.

Patching

Our CRS’s patching effectiveness, as measured by the security score, ranks fourth (Figure 3).

Figure 3: Teams in the qualifying event ranked by patch effectiveness (security score). Orange bars signify finalists.

Our CRS patches bugs by translating challenges into LLVM bitcode with mcsema. Patches are applied to the LLVM bitcode, optimized, and then converted back into executable code. The actual patching works by gracefully terminating the challenge when invalid memory accesses are detected. Patching the LLVM bitcode representation of challenges provides us with enormous power and flexibility:

  • We can easily validate any memory access and keep track of all memory allocations.
  • Complex algorithms, such as dataflow tracking, dominator trees, dead store elimination, loop detection, etc., are very simple to implement using the LLVM compiler infrastructure.
  • Our patching method can be used on real-world software, not just CGC challenges.

We created two main patching strategies: generic patching and bug-based patching. Generic patching is an exclusion-based strategy: it first assumes that every memory access must be verified, and then excludes accesses that are provably safe. The benefit of generic patching is that it patches all possible invalid memory accesses in a challenge. Bug-based patching is an inclusion-based strategy: it first assumes only one memory access (where the CRS found a bug) must be verified, and then includes nearby accesses that may be unsafe. Each patching strategy has multiple heuristics to determine which accesses should be included or excluded from verification.

The inclusion and exclusion heuristics generate patched challenges with different security/performance tradeoffs. The patched challenges generated by these heuristics were tested for performance and security to determine which heuristic performed best while still fixing the bug. For the qualifying event, we evaluated both generic and bug-based patching, but ultimately chose a generic-only patching strategy. Bug-based patching was slightly more performant, but generic patching was more comprehensive and it patched bugs that our CRS hadn’t found.
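
As a rough source-level illustration of what the patched bitcode does (the real transform rewrites LLVM IR; is_tracked stands in for the allocation tracking described above), a store selected for verification behaves like this:

#include <stdint.h>
#include <stdlib.h>

/* Hypothetical runtime check backed by the allocation bookkeeping
   described above. */
extern int is_tracked(uintptr_t addr, size_t len);

/* A store the patch could not prove safe is rewritten to this form. */
static void checked_store8(uint8_t *dst, uint8_t val) {
    if (!is_tracked((uintptr_t)dst, sizeof(*dst)))
        exit(0);   /* gracefully terminate instead of corrupting memory */
    *dst = val;
}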

Functionality and Performance

Functionality and performance scores combine to create an availability score. The availability score is used as a scaling factor for points gained by patching and bug finding. This scaling factor only matters for successfully patched challenges, since those are the only challenges that can score points. The following graphs only consider functionality and performance of successfully patched challenges.
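
As a hypothetical illustration of the scaling (the exact formula is in the CGC Rules): a patched challenge whose points were doubled by a crashing input but whose patch earned only a 0.5 availability score would net the same points as an unproven patch with perfect availability, since 2 × 0.5 = 1 × 1.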

Functionality

Out of the 94 challenges that our CRS successfully patched, 56 retained full functionality, 30 retained partial functionality, and 8 were nonfunctional. Of the top 10 teams in the qualifying event, our CRS ranks 5th in terms of fully functional patched challenges (Figure 4). We suspect our patched challenges lost functionality due to problems in mcsema, our x86 to LLVM translator. We hope to verify and address these issues once DARPA open-sources the qualifying event challenges.

Figure 4: The count of perfectly functional, partially functional, and nonfunctional challenges submitted by each of the top 10 teams in the qualifying event. Orange bars signify finalists.

Performance

The performance of patched challenges is how our CRS snatched defeat from the jaws of victory. Of the top ten teams in the qualifying event, our CRS placed last in terms of patched challenge performance (Figure 5).

Figure 5: Average and median performance scores of the top ten qualifying event participants. Orange bars signify finalists.

Our CRS produces slow binaries for two reasons: technical and operational. The technical reason is that performance of our patched challenges is an artifact of our patching process, which translates challenges into LLVM bitcode and then re-emits them as executable binaries. The operational reason is that our patching was developed late and optimized for the wrong performance measurements.

So, why did we optimize for the wrong performance measurements? The official CGC performance measurement tools were kept secret, because the organizers wanted to ensure that no one could cheat by gaming the performance measurements. Therefore, we had to measure performance ourselves, and our metrics showed that CPU overhead of our patched challenges was usually negligible. The main flaw that we observed was that our patched challenges used too much memory. Because of this, we spent time and effort optimizing our patching to use less memory at the cost of more CPU time.

It turns out we optimized for the wrong thing, because our self-measurement did not agree with the official measurement tools (Table 1). When self-measuring, our worst-performing patching method had a median CPU overhead of 33% and a median memory overhead of 69%. The official qualifying event measured us at 76% CPU overhead and 28% memory overhead. Clearly, our self-measurements were considerably different from official measurements.

Measurement                              Median CPU Overhead    Median Memory Overhead
Worst Self-Measured Patching Method      33%                    69%
Official Qualifying Event                76%                    28%

Table 1: Self-measured CPU and memory overhead versus the official qualifying event measurements.

Our CRS measured its overall score with our own performance metrics. The self-measured score of our CRS was 106, which would have put us in second place. The real overall score was 21.36, putting us in ninth.

An important aspect of software development is choosing where to focus your efforts, and we chose poorly. CGC participants had access to the official measuring system during two scored events held during the year, one in December 2014 and one in April 2015. We should have evaluated our patching system thoroughly during both scored events. Unfortunately, our patching wasn’t fully operational until after the second scored event, so we had no way to verify the accuracy of our self-measurement. The performance penalty of our patching isn’t a fundamental issue. Had we known how bad it was, we would have fixed it. However, according to our own measurements the patching was acceptable so we focused efforts elsewhere.

What’s Next?

According to the CGC FAQ (Question 46), teams are allowed to combine after the qualifying event. We hope to join forces with another team that qualified for the CGC final event, and use the best of both our technologies to win. The technology behind our CRS will provide a significant advantage to any team that partners with us. If you would like to discuss a potential partnership for the CGC final, please contact us at cgc@trailofbits.com.

If we cannot find a partner for the CGC final, we will focus our efforts on adapting our CRS to automatically find and patch vulnerabilities in real software. Our system is up to the task: it has already proven that it can find bugs, and all of its core components were derived from software that works on real Linux binaries. Several components even have Windows and 64-bit support, and adding support for other platforms is a possibility. If you are interested in commercial applications of our technology, please get in touch with us at cgc@trailofbits.com.

Finally, we plan to contribute back fixes and updates to the open source projects utilized in our CRS. We used numerous open source projects during development, and have made several custom fixes and modifications. We look forward to contributing these back to the community so that everyone benefits from our improvements.

How to Harden Your Google Apps

Never let a good incident go to waste.

Today, we’re using the OPM incident as an excuse to share with you our top recommendations for shoring up the security of your Google Apps for Work account.

More than 5 million companies rely on Google Apps to run their critical business functions, like email, document storage, calendaring, and chat. As a result, a huge amount of data pools inside Google Apps just waiting for an attacker to gain access to it. In any modern company, this is target #1.

This guide is for small businesses that want to avoid the worst security problems while expending minimal effort. If you’re in a company with more than 500 employees and have dedicated IT staff, this guide is not for you.

Risks

A lot can go wrong with computers, even when you eliminate the complexity of client applications and move to a cloud-hosted platform like Google Apps. Many people think about security too abstractly to reason about concrete steps for improvement. In this context, here are the attacks we’re concerned about:

  • Password management. Users occasionally reuse passwords, surrender them to successful phishing, or lose all of them due to poor choice of password manager.
  • Cross-Site Scripting (XSS). Google has an enormous number of web applications under active development. They routinely acquire and add new companies to their domain. Some new vulnerabilities might be tucked into this torrent of fresh code. Any one XSS can result in a lost cookie that logs an attacker into your Google account.
  • Inadvertent Disclosure. Permissions management is hard. The user interface for Google Docs does not make it easier. Internal documents, calendars, and more can end up publicly available and indexed by search.
  • Backdoored Accounts. In the event of a successful compromise of one user’s account, the attacker will seek to preserve access so they can come back later. Backdoored Google Apps accounts can continue to leak emails even after you format an infected computer.
  • Exploits and Malware. Even with an all-Chromebook fleet (which we wholeheartedly recommend), there is a chance that computers will get infected and malware will ride on the back of legitimate sessions to gain access to your accounts.

Top 8 Google Apps Security Enhancements

If you make these few changes, you’ll be miles ahead of most other people and at considerably less risk from any of the above scenarios.

1. Create a secure Super Administrator account

In admin.google.com, create a new admin account for your domain. You’ll only use this account to administer your domain; no email, no chat. Stay logged out of it. Set the secondary, recovery email to a secure mail host (like your personal Gmail). Turn on 2FA or use a Security Key for both accounts.

Separate the role for administrative access to your domain

2. Plug the leaks in your email policy

Gmail provides a wealth of options that allow users to forward, share, report, or disclose their emails to third parties. Any of these options could enable an inadvertent disclosure or provide a handy backdoor to an attacker who has lost their primary method of access. Disable read receipts, mail delegation, emailing profiles, automatic forwarding, and outbound gateways.

Limit what can go wrong with email

Disable automatic forwarding

Keep your mail to yourself

Keep work email configurations clean

3. Enable 2-Step Verification (2SV) and review your enrollment reports

2SV (or, as it’s more commonly known, two-factor authentication or 2FA) will save your ass. With 2FA switched on, stolen passwords won’t be enough to compromise accounts. Hundreds of services support it. You should encourage your users to turn it on everywhere. Heck, just buy a bunch of Security Keys and hand them out like health workers would condoms.

Why is this even an option? Turn it on already!

Note: The advanced settings expose an option to force 2FA on every user on your domain. To use this feature properly, you must create an exception group to allow new users to set up their accounts. tl;dr Ignore the enforcement feature and just go bop your users over the head when you see they haven’t turned 2FA on yet.

4. Delete or suspend unmaintained user accounts

Stale accounts have accumulated sensitive data yet have no one to watch over them. Over the lifetime of an account, it may have connected to dozens of apps, left its password saved in mobile and client apps, and shared public documents now left forgotten and unmaintained. Reduce the risk of these accounts by deleting or suspending them.

Delete or suspend unmaintained accounts

5. Reduce your data’s exposure to third parties

The default settings for Mail, Drive, Talk, and Sites can lead to over-sharing of data. Retain the flexibility for employees to choose the appropriate setting, but tighten the defaults to start with the data private and warn users when it is not. Currently, there is no universal control; you have to make changes to each Google app individually.

Disable contact sharing (a great way to determine who your CEO talks to)

Stricter defaults for Drive

Help users recognize who they are talking to

Don’t overstore data if you don’t need to

Help users understand who can see their Site

6. Prevent email forgery using your domain name

Left unprotected, your domain makes it easy for an attacker to spoof an email that looks like it came from your CEO and send it to your staff, partners, or clients. Ensure this does not happen. Turn on SPF and DKIM to authenticate email for your domain. Both require modifications to TXT records in your DNS settings, as in the example records shown below.

Turn on DKIM for your domain and get this green check
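
For reference, typical records look like the following sketch. Here example.com is a placeholder, and the DKIM selector and public key come from the Gmail settings in your admin console:

example.com.                   IN TXT  "v=spf1 include:_spf.google.com ~all"
google._domainkey.example.com. IN TXT  "v=DKIM1; k=rsa; p=<public key from the admin console>"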

7. Disable services from Google that you don’t need

Cross-site Scripting (XSS) and other client-side web application flaws are an underappreciated method for performing targeted hacks. DOM XSS can be used as a method of persistence. Labelling a bug as “post-authentication” means little when you stay logged into your Google account all day. Disable access to Google services you don’t use. That will help limit the amount of code your cookies are exposed to.

There are dozens of services you’ll never use. Disable them.

8. Set booby traps for the hacker that makes it in anyway

Your defenses will give way at some point. When this happens, you’ll want to know it, fast. Enable predefined alerts to receive an email when major changes are made to your Google Apps. Turn on alerts for suspicious login activity, admin privileges added or revoked, users added or deleted, and any settings changes. Send the alerts to a normal user account, since you won’t be logged into the Super Administrator account regularly.

Turn on alerts and be liberal with who gets them

Security Wishlist for Google Apps

Google Apps offers one of the most secure platforms for running outsourced IT services for your company. However, even the configuration above leaves some blind spots.

Better support for inbound attachment filtering

Attackers will email your users malicious attachments or links. This problem is largely one for the endpoint (and Google offers Chromebooks as one solution), but an email provider can do more to mitigate this tactic.

The Google Apps settings for Gmail offer an “attachment compliance” feature that, while not specifically made for security, could be enhanced to protect users from malicious attachments. Gmail could prepend a warning to the subject of emails with certain attachments, quarantine attachments with certain features (e.g. macros), send attachments to a third-party service for analysis via an ICAP-like protocol, or convert attachments (say, doc to docx).

If we took this idea even further, Gmail could strip the attachments entirely and place them in Google Drive. This would make it easier to remove access to the attachment in the event it was identified as malicious and it would make it easier to perform repeated analyses of past attachments to discover previously unknown malicious content.

Tune attachment compliance options to protect users from malicious attachments

Better management of 2FA enforcement

Google was the first major service provider to roll out 2FA to all their users. Their support for this technology has been nothing short of tremendous. But it’s still too hard to enforce across your domain in Google Apps.

Turning on organization-wide enforcement requires setting up an exception group and performing extra work each time you add a new user to your domain. Could Google require 2FA on first sign-in, or give new users a configurable X-day grace period during which they could use just a password? How about bulk discounts on Security Keys?

Built-in management and reporting for DMARC

Domain Message Authentication Reporting and Conformance (DMARC), like SPF and DKIM, was designed to enhance the security and deliverability of the email you send. DMARC can help you discover how and when other people may be sending email in your name. If you want to turn on DMARC for your Google Apps, you’re pretty much on your own.
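
For reference, a minimal monitoring-mode record looks something like the sketch below (example.com and the reporting address are placeholders):

_dmarc.example.com. IN TXT  "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"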

Google should make it easier to turn on DMARC and provide the tools to help manage it. This is a no-brainer, considering email is their flagship feature.

End-to-end crypto on all their services

If the data for your organization were stored encrypted on Google servers, you wouldn’t have to worry as much about password disclosures, snooping Google employees, or security incidents at Google. Anyone who gained access to your data, but lacked the proper key, would be unable to read it.

Google’s End-to-End project will help users deploy email crypto. If you want this feature today, the S/MIME standard is supported out-of-the-box on Mail.app, iOS, Outlook, Thunderbird, and more. Amazon WorkMail, a competitor to Google Apps, allows client-managed keys. If you encrypt 100% of your internal email, its contents are unreadable to any third party that happens to gain access to your accounts.

However, this still leaves sensitive data that lives unprotected on other services, like Hangouts and Drive. Yes, there are alternatives, but none are ideal in this scenario. You could deploy your own in-house secure videoconferencing or consider adopting tarsnap, but the inconvenience is still too great. This problem is still waiting for a solution in Google Apps.

If You Have a Problem

By now, your Google Apps domain should be less vulnerable. So, what happens if you discover one of your users has been hacked? Google has you covered here. Review the “Administrator security checklist” if you think you have a problem. Their step-by-step guide is nearly everything you need to get started responding to a security incident.

Feedback

I hope that you have found this guide useful. What do you use to help secure your Google Apps? Are there features on your Google Apps wishlist that I missed?

UPDATE 1:

GCHQ released a guide for securing Google Apps in November 2015.

Introducing the RubySec Field Guide

Vulnerabilities have been discovered in Ruby applications with the potential to affect vast swathes of the Internet and attract attackers to lucrative targets online.

These vulnerabilities take advantage of features and common idioms such as serialization and deserialization of data in the YAML format. Nearly all large, tested and trusted open-source Ruby projects contain some of these vulnerabilities.

Few developers are aware of the risks.

In our RubySec Field Guide, you’ll cover recent Ruby vulnerability classes and their root causes. You’ll see demonstrations and develop real-world exploits. You’ll study the patterns behind the vulnerabilities and develop software engineering strategies to avoid these vulnerabilities in your projects.

You Will Learn

  • The mechanics and root causes of past Rails vulnerabilities
  • Methods for mitigating the impact of deserialization flaws
  • Rootkit techniques for Rack-based applications via YAML deserialization
  • Mitigation techniques for YAML deserialization flaws
  • Defensive Ruby programming techniques
  • Advanced testing techniques and fuzzing with Mutant

We’ve structured this field guide so you can learn as quickly as you want, but if you have questions along the way, contact us. If there’s enough demand, we may even schedule an online lecture.

Now, to work.

-The Trail of Bits Team

Closing the Windows Gap

The security research community is full of grey beards that earned their stripes writing exploits against mail servers, domain controllers, and TCP/IP stacks. These researchers started writing exploits on platforms like Solaris, IRIX, and BSDi before moving on to Windows exploitation. Now they run companies, write policy, rant on twitter, and testify in front of Congress. I’m not one of those people; my education in security started after Windows Vista and then expanded through Capture the Flag competitions when real-world research got harder. Security researchers entering the industry post-2010[1] learn almost exclusively via Capture the Flag competitions.

Occasionally, I’ll try to talk a grey beard into playing capture the flag. It’s like trying to explain Pokemon to adults. Normally such endeavors are an exercise in futility; however, on a rare occasion they’ll get excited and agree to try it out! They then get frustrated and stuck on the same problems I do – it’s fantastic for my ego[2].

“Ugh, it’s 90s shellcoding problems applied today.”
— muttered during DEFCON 22 CTF Quals

Following a particularly frustrating CTF, we discussed why there are so few Windows challenges despite Windows being such an important part of our industry. Only the Russian CTFs release Windows challenges; none of the large American CTFs do.

Much like Cold War-era politics, the Russian CTFs have edged out a Windows superiority: a Windows gap.

Projected magnitude of the Windows gap

The Windows gap exists outside of CTF as well. Over the past few years the best Windows security research has come out of Russia[3] and China. So, why are the Russians and Chinese so good at Windows? Well, because they actually use Windows…and for some reason western security researchers don’t.

Let’s close this Windows gap. Windows knowledge is important for our industry.

Helping the CTF community

If Capture the Flag competitions are how today’s greenhorns cut their teeth, we should have more Windows-based challenges and competitions. To facilitate this, Trail of Bits is releasing AppJailLauncher, a framework for making exploitable Windows challenges!

This man knows Windows and thinks you should too.

As a contest organizer, securing your infrastructure is the biggest priority, and securing Windows services was always a bit tricky until Windows 8 introduced AppContainers. AppJailLauncher uses AppContainers to keep everything nice and secure from griefers. The repository includes everything you need to isolate a Windows TCP service from the rest of the operating system; a condensed sketch of the core idea follows.
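
For a taste of the mechanics, here is a heavily condensed sketch of launching a process inside an AppContainer using the Win32 APIs involved (error handling and cleanup omitted; see the AppJailLauncher repository for the real implementation):

#include <windows.h>
#include <userenv.h>   /* CreateAppContainerProfile; link userenv.lib */

void launch_in_appcontainer(PCWSTR exe_path) {
    PSID sid = NULL;
    /* Create (or reuse) a container profile for the jailed service. */
    CreateAppContainerProfile(L"CtfChallenge", L"CTF Challenge",
                              L"Isolated CTF service", NULL, 0, &sid);

    SECURITY_CAPABILITIES caps = {0};
    caps.AppContainerSid = sid;   /* no extra capabilities: deny by default */

    SIZE_T size = 0;
    InitializeProcThreadAttributeList(NULL, 1, 0, &size);
    LPPROC_THREAD_ATTRIBUTE_LIST attrs =
        (LPPROC_THREAD_ATTRIBUTE_LIST)HeapAlloc(GetProcessHeap(), 0, size);
    InitializeProcThreadAttributeList(attrs, 1, 0, &size);
    UpdateProcThreadAttribute(attrs, 0,
                              PROC_THREAD_ATTRIBUTE_SECURITY_CAPABILITIES,
                              &caps, sizeof(caps), NULL, NULL);

    STARTUPINFOEXW si = {0};
    si.StartupInfo.cb = sizeof(si);
    si.lpAttributeList = attrs;
    PROCESS_INFORMATION pi;
    CreateProcessW(exe_path, NULL, NULL, NULL, FALSE,
                   EXTENDED_STARTUPINFO_PRESENT, NULL, NULL,
                   &si.StartupInfo, &pi);
}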

Additionally, we’re releasing the source code to greenhornd, a 2014 CSAW challenge I wrote to introduce people to Windows exploitation and the best debugger yet developed: WinDbg. The repository includes the binary as released, deployment directions, and a proof-of-vulnerability script.

We’re hoping to help drag the CTF community kicking and screaming into Windows expertise.

Windows Reactions

Releasing a Windows challenge last year at CSAW was very entertaining. There was plenty of complaining[4]:

<dwn> how is this windows challenge only 200 points omg
<dwn> making the vuln obvious doesn’t make windows exploitation any easier ;_;

<mserrano> RyanWithZombies: dude but its fuckin windows
<mserrano> even I don’t use windows anymore
<@RyanWithZombies> i warned you guys for months
<mserrano> also man windows too hard

<geohot> omg windows
<geohot> is so hard
<geohot> will do tomorrow
<geohot> i don’t have windows vm

<ebeip90> zomg a windows challenge
<ebeip90>❤
[ hours later ]
<ebeip90> remember that part a long time ago when I said “Oh yay, a Windows challenge”?

<ricky> Windows is hard
<miton> ^

Some praise:

<cai_> i liked your windows one btw🙂

<MMavipc> RyanWithZombies pls more windows pwning/rce

<CTFBroforce> I was so confused I have never done a windows exploit
<CTFBroforce> this challenge is going to make me look into windows exploits
<CTFBroforce> I dont know how to write windows shell code

<spq> thx for the help and the force to exploit windows with shellcode for the first time🙂

It even caused some arguments among competitors:

<clockish> dudes, shut up, windows is hard
<MMavipc> windows is easy
<MMavipc> linux is hard

We hope AppJailLauncher will be used to elicit more passionate responses over the next few years!

Footnotes
  1. Many of the most popular CTFs started in 2010 and 2011: Ghost in the Shellcode (2010), RuCTFe (2010), PlaidCTF (2011), Codegate (2011), PHDays (2011). Very few predate 2010.
  2. Much like watching geohot fail at format string exploitation during a LiveCTF broadcast: https://www.youtube.com/watch?v=td1KEUhlSuk
  3. Try searching for obscure Windows kernel symbols, you’ll end up on a Russian forum.
  4. The names have not been changed to shame the enablers.

Empire Hacking, a New Meetup in NYC

Today we are launching Empire Hacking, a bi-monthly meetup that focuses on pragmatic security research and new discoveries in attack and defense.

It’s basically a security poetry jam

Empire Hacking is technical. We aim to bridge the gap between weekend projects and funded research. There won’t be any product pitches here. Come prepared with your best ideas.

Empire Hacking is exclusive. Talks are by invitation-only and are held under the Chatham House Rule. We will discuss ongoing research and internal projects you won’t hear about anywhere else.

Empire Hacking is engaging. Talk about subjects you find interesting, face to face, with a community of experts from across the industry.

Each meetup will consist of short talks from three expert speakers and run from 6-9pm at Trail of Bits HQ. Tentative schedule: Even months, on Patch Tuesday (the 2nd Tuesday). Beverages and light food will be provided. Space is limited. Please apply on our Meetup page.

Our inaugural meetup will feature talks from Chris Rohlf, Dr. Byron Cook, and Nick DePetrillo on Tuesday, June 9th.

Offense at Scale

Chris will discuss the effects of scale on vulnerability research, fuzzing and real attack campaigns.

Chris Rohlf runs the penetration testing team at Yahoo in NYC. Before Yahoo he was the founder of Leaf Security Research, a highly-specialized security consultancy with expertise in vulnerability discovery, reversing and exploit development.

Automatically proving program termination (and more!)

Byron will discuss research advances that have led to practical tools for automatically proving program termination and related properties.

Dr. Byron Cook is a professor of computer science at University College London.

Cellular Baseband Exploitation

Baseband exploitation has been a topic of interest for many; however, few have described the effort required to make such attacks practical. In this talk, we explore the challenges of reliable, large-scale cellular baseband exploitation.

Nick DePetrillo is a principal security engineer at Trail of Bits with expertise in cellular hardware and infrastructure security.

Keep up with Empire Hacking by following us on Twitter. See you at a meetup!

Frequently Asked Questions

Why is Empire Hacking a membership-based group?

To cultivate a tight-knit community. This should be a place where members feel free to discuss private or exclusive research and data, knowing that it will remain within the group. Furthermore, we believe that a membership process increases motivation to make a high-quality contribution.

To protect against abuse. Everyone is expected to treat his or her fellow members with respect and decency. Violators lose membership and all access to the group, including membership lists, meeting locations, and our discussion board.

To follow the crowd. Not really. But seriously, we are hardly the first private meetup or group in security. Consider that NCC Open Forum “is by invite only and is limited to engineers and technical managers”, NY Information Security Meetup charges $5 to attend, and Ops-T “does not accept applications for membership.”

Why does Empire Hacking use the Chatham House Rule?

We welcome everyone to apply to Empire Hacking, even journalists. But we don’t want participants to worry that their personal thoughts will be relayed to outsiders, or used against them or their employers. We enforce the Chatham House Rule to preserve the balance between candor and discretion.

How can I attend a meetup?

Please apply on our meetup.com page. If you have any trouble, feel free to reach out to any of the Trail of Bits staff, including on our Slack community for Empire Hacking.

The Foundation of 2015: 2014 in Review

We need to do more to protect ourselves. 2014 overflowed with front-page proof: Apple, Target, JPMorgan Chase, etc., etc.

The current, vulnerable status quo begs for radical change, an influx of talented people, and substantially better tools. As we look ahead to driving that change in 2015, we’re proud to highlight a selection of our 2014 accomplishments that will underpin that work.

1. Open-source framework to transform binaries to LLVM bitcode

Our framework for analyzing and transforming machine-code programs to LLVM bitcode became a new tool in the program analysis and reverse engineering communities. McSema connects the world of LLVM program analysis and manipulation tools to binary executables. Currently, it translates the semantics of x86 programs, including subsets of integer arithmetic, floating point, and vector operations.

2. Shaped smarter public policy

The spate of national-scale computer security incidents spurred anxious conversation and action. To pre-empt poorly conceived laws from poorly informed lawmakers, we worked extensively with influential think tanks to help educate our policy makers on the finer points of computer security. The Center for a New American Security’s report “Surviving on a Diet of Poisoned Fruit” was just one result of this effort.

3. More opportunities for women

As part of our ongoing collaboration with NYU-Poly, Trail of Bits put its support behind the CSAW Program for High School Women and Career Discovery in Cyber Security Symposium. These events are intended to help guide talented and interested women into careers in computer security. We want to create an environment where women have the resources to contribute and excel in this industry.

4. Empirical data on secure development practices

In contrast with traditional security contests, Build-it, Break-it, Fix-it rewards secure software development under the same pressures that lead to bugs: tight deadlines, performance requirements, competition, and the allure of money. We were invited to share insights from the event at Microsoft’s Bluehat v14.

5. Three separate Cyber Fast Track projects

Under DARPA’s Program Manager Peiter ‘Mudge’ Zatko, we completed three distinct projects in the revolutionary Cyber Fast Track program: CodeReason, MAST, and PointsTo. Five of our employees went to the Pentagon to demonstrate our creations to select members of the Department of Defense. We’re happy to have participated and been recognized for our work. We’re now planning on giving back; CodeReason will see an open-source release in 2015!

6. Taught machines to find Heartbleed

Heartbleed, the infamous OpenSSL vulnerability, went undetected for so long because it’s hard for static analyzers to detect. So, Andrew Ruef took on the challenge and wrote a checker for clang-analyzer that can find Heartbleed and other bugs like it automatically. We released the code for others to learn from.

7. A resource for students of computer security

One of the most fun and effective ways to learn computer security is by competing in Capture the Flag events. But many fledgling students don’t know where to get started. So we wrote the Capture the Flag Field Guide to help them get involved and encourage them to take the first steps down this career path.

8. The iCloud Hack spurs our two-factor authentication guide

Adding two-factor authentication is always a good idea. Just ask anyone whose account has been compromised. If you store any sensitive information with Google, Apple ID or Dropbox, you’ll want to know about our guide to adding an extra layer of protection to your accounts.

9. Accepted into DARPA’s Cyber Grand Challenge

The prize: $2 million. The challenge: Build a robot that can repair insecure software without human input. If successful, this program will have a profound impact on the way companies secure their data in the future. We were selected as one of seven funded teams to compete.

10. THREADS 2014: How to automate security

Our CEO Dan Guido chaired THREADS, a research and development conference that takes place at NYU-Poly’s Cyber Security Awareness Week (CSAW). This year’s theme focused on scaling security — ensuring that security is an integral and automated part of software development and deployment models. We believe that the success of automated security is essential to our ever more internetworked society and devices. See talks and slides from the event.

Looking ahead.

This year, we’re excited to develop and share more code, including: improvements to McSema (i.e. support for LLVM 3.5, lots more SSE and FPU instruction support, and a new control flow recovery module based on JakStab), a private videochat service, and an open-source release of CodeReason. We’re also excited about Ghost in the Shellcode (GitS) — a capture the flag competition at ShmooCon in Washington DC in January that three of our employees are involved in running. And don’t forget about DARPA’s Cyber Grand Challenge qualifying event in June.

For now, we hope you’ll connect with us on Twitter or subscribe to our newsletter.

Close Encounters with Symbolic Execution (Part 2)

This is part two of a two-part blog post that shows how to use KLEE with mcsema to symbolically execute Linux binaries (see the first post!). This part covers how to build KLEE and mcsema, and provides a detailed example of using them to symbolically execute an existing binary. The binary we’ll be symbolically executing is an oracle for a maze with hidden walls, as promised in Part 1.

As a visual example, we’ll show how to get from an empty maze to a solved maze:

Maze (Before) Maze (After)

Building KLEE with LLVM 3.2 on Ubuntu 14.04

One of the hardest parts about using KLEE is building it. The official build instructions cover KLEE on LLVM 2.9 and LLVM 3.4 on amd64. To analyze mcsema generated bitcode, we will need to build KLEE for LLVM 3.2 on i386. This is an unsupported configuration for KLEE, but it still works very well.

We will be using the i386 version of Ubuntu 14.04. The 32-bit version of Ubuntu is required to build a 32-bit KLEE. Do not try adding -m32 to CFLAGS on a 64-bit version. It will take away hours of your time that you will never get back. Get the 32-bit Ubuntu. The exact instructions are described in great detail below. Be warned: building everything will take some time.

# These are instructions for how to build KLEE and mcsema. 
# These are a part of a blog post explaining how to use KLEE
# to symbolically execute closed source binaries.
 
# install the prerequisites
sudo apt-get install vim build-essential g++ curl python-minimal \
  git bison flex bc libcap-dev cmake libboost-dev \
  libboost-program-options-dev libboost-system-dev ncurses-dev nasm
 
# we assume everything KLEE related will live in ~/klee.
cd ~
mkdir klee
cd klee
 
# Get the LLVM and Clang source, extract both
wget http://llvm.org/releases/3.2/llvm-3.2.src.tar.gz
wget http://llvm.org/releases/3.2/clang-3.2.src.tar.gz
tar xzf llvm-3.2.src.tar.gz
tar xzf clang-3.2.src.tar.gz
 
# Move clang into the LLVM source tree:
mv clang-3.2.src llvm-3.2.src/tools/clang
 
# normally you would use cmake here, but today you HAVE to use autotools.
cd llvm-3.2.src
 
# For this example, we are only going to enable the x86 target.
# Building will take a while. Go make some coffee, take a nap, etc.
./configure --enable-optimized --enable-assertions --enable-targets=x86
make
 
# add the resulting binaries to your $PATH (needed for later building steps)
export PATH=`pwd`/Release+Asserts/bin:$PATH
 
# Make sure you are using the correct clang when you execute clang — you may 
# have accidentally installed another clang that has priority in $PATH. Let's 
# verify the version, for sanity. Your output should match what's below.
# 
#$ clang --version
#clang version 3.2 (tags/RELEASE_32/final)
#Target: i386-pc-linux-gnu
#Thread model: posix
 
# Once clang is built, it's time to build STP and uClibc for KLEE.
cd ~/klee
git clone https://github.com/stp/stp.git
 
# Use CMake to build STP. Compared to LLVM and clang,
# the build time of STP will feel like an instant.
cd stp
mkdir build && cd build
cmake -G 'Unix Makefiles' -DCMAKE_BUILD_TYPE=Release ..
make
 
# After STP builds, lets set ulimit for STP and KLEE:
ulimit -s unlimited
 
# Build uclibc for KLEE
cd ../..
git clone --depth 1 --branch klee_0_9_29 https://github.com/klee/klee-uclibc.git
cd klee-uclibc
./configure -l --enable-release
make
cd ..
 
# It’s time for KLEE itself. KLEE is updated fairly often and we are 
# building on an unsupported configuration. These instructions may not 
# work for future versions of KLEE. These examples were tested with 
# commit 10b800db2c0639399ca2bdc041959519c54f89e5.
git clone https://github.com/klee/klee.git
 
# Proper configuration of KLEE with LLVM 3.2 requires this long voodoo command
cd klee
./configure --with-stp=`pwd`/../stp/build \
  --with-uclibc=`pwd`/../klee-uclibc \
  --with-llvm=`pwd`/../llvm-3.2.src \
  --with-llvmcc=`pwd`/../llvm-3.2.src/Release+Asserts/bin/clang \
  --with-llvmcxx=`pwd`/../llvm-3.2.src/Release+Asserts/bin/clang++ \
  --enable-posix-runtime
make
 
# KLEE comes with a set of tests to ensure the build works. 
# Before running the tests, libstp must be in the library path.
# Change $LD_LIBRARY_PATH to ensure linking against libstp works. 
# A lot of text will scroll by with a test summary at the end.
# Note that your results may be slightly different since the KLEE 
# project may have added or modified tests. The vast majority of 
# tests should pass. A few tests fail, but we’re building KLEE on 
# an unsupported configuration so some failure is expected.
export LD_LIBRARY_PATH=`pwd`/../stp/build/lib
make check
 
#These are the expected results:
#Expected Passes : 141
#Expected Failures : 1
#Unsupported Tests : 1
#Unexpected Failures: 11
 
# KLEE also has a set of unit tests so run those too, just to be sure. 
# All of the unit tests should pass!
make unittests
 
# Now we are ready for the second part: 
# using mcsema with KLEE to symbolically execute existing binaries.
 
# First, we need to clone and build the latest version of mcsema, which
# includes support for linked ELF binaries and comes with the necessary
# samples to get started.
cd ~/klee
git clone https://github.com/trailofbits/mcsema.git
cd mcsema
git checkout v0.1.0
mkdir build && cd build
cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=Release ..
make
 
# Finally, make sure our environment is correct for future steps
export PATH=$PATH:~/klee/llvm-3.2.src/Release+Asserts/bin/
export PATH=$PATH:~/klee/klee/Release+Asserts/bin/
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/klee/stp/build/lib/

Translating the Maze Binary

The latest version of mcsema includes the maze program from Felipe’s blog in the examples as demo_maze. In the instructions below, we’ll compile the maze oracle to a 32-bit ELF binary and then convert the binary to LLVM bitcode via mcsema.

# Note: tests/demo_maze.sh completes these steps automatically
cd ~/klee/mcsema/mc-sema/tests
# Load our environment variables
source env.sh
# Compile the demo to a 32-bit ELF executable
${CC} -ggdb -m32 -o demo_maze demo_maze.c
# Recover the CFG using mcsema's bin_descend
${BIN_DESCEND_PATH}/bin_descend -d -func-map=maze_map.txt -i=demo_maze -entry-symbol=main
# Convert the CFG into LLVM bitcode via mcsema's cfg_to_bc
${CFG_TO_BC_PATH}/cfg_to_bc -i demo_maze.cfg -driver=mcsema_main,main,raw,return,C -o demo_maze.bc
# Optimize the bitcode
${LLVM_PATH}/opt -O3 -o demo_maze_opt.bc demo_maze.bc

We will use the optimized bitcode (demo_maze_opt.bc) generated by this step as input to KLEE. Now that everything is set up, let’s get to the fun part — finding all maze solutions with KLEE.

# create a working directory next to the other KLEE examples.
cd ~/klee/klee/examples
mkdir maze
cd maze
# copy the bitcode generated by mcsema into the working directory
cp ~/klee/mcsema/mc-sema/tests/demo_maze_opt.bc ./
# copy the register context (needed to build a driver to run the bitcode)
cp ~/klee/mcsema/mc-sema/common/RegisterState.h ./

Now that we have the maze oracle binary in LLVM bitcode, we need to tell KLEE which inputs are symbolic and when a maze is solved. To do this we will create a small driver that will intercept the read() and exit() system calls, mark input to read() as symbolic, and assert on exit(1), a successful maze solution.

To make the driver, create a file named maze_driver.c with contents from this gist and use clang to compile the maze driver into bitcode. Every function in the driver is commented to help explain how it works.

clang -I../../include/ -emit-llvm -c -o maze_driver.bc maze_driver.c

We now have two bitcode files: the translation of the maze program and a driver to start the program and mark inputs as symbolic. They need to be combined into one bitcode file for use with KLEE, which can be done with llvm-link. There will be a compatibility warning, which is safe to ignore in this case.

llvm-link demo_maze_opt.bc maze_driver.bc > maze_klee.bc

Running KLEE

Once we have the combined bitcode, let’s do some symbolic execution. Lots of output will scroll by, but we can see KLEE solving the maze and trying every state of the program. If you recall from the driver, we can recognize successful states because they will trigger an assert in KLEE. There are four solutions to the original maze, so we should see four assert results (note: your test numbers may differ):

klee --emit-all-errors -libc=uclibc maze_klee.bc
# Lots of things will scroll by
ls klee-last/*assert*
# For me, the output is:
# klee-last/test000178.assert.err  klee-last/test000315.assert.err
# klee-last/test000270.assert.err  klee-last/test000376.assert.err

Now let’s use a quick bash script (sketched after the list) to look at the outputs and see if they match the original results. The solutions identified by KLEE from the mcsema bitcode are:

  • sddwddddsddw
  • ssssddddwwaawwddddsddw
  • sddwddddssssddwwww
  • ssssddddwwaawwddddssssddwwww

… and they match the results from Felipe’s original blog post!
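
A few lines of bash do the job; here is a sketch (each .assert.err file pairs with a .ktest file of the same name, though ktest-tool's exact output format varies across KLEE versions):

# print the symbolic inputs that led to each assertion failure
for t in klee-last/*.assert.err; do
  ktest-tool "${t%.assert.err}.ktest" | grep data
done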

Conclusion

Symbolic execution is a powerful tool that can execute programs on all inputs at once. Using mcsema and KLEE, we can symbolically execute existing closed source binary programs. In this example, we found all solutions to a maze with hidden walls — starting from an opaque binary. KLEE and mcsema could do this while knowing nothing about mazes and without being tuned for string inputs.

This example is simple, but it shows what is possible: using mcsema we can apply the power of KLEE to closed source binaries. We could generate high code coverage tests for closed source binaries, or find security vulnerabilities in arbitrary binary applications.

Note: We’re looking for talented systems engineers to work on mcsema and related projects (contract and full-time). If you’re interested in being paid to work on or with mcsema, send us an email!

Close Encounters with Symbolic Execution

At THREADS 2014, I demonstrated a new capability of mcsema that enables the use of KLEE, a symbolic execution framework, on software available only in binary form. In the talk, I described how to use mcsema and KLEE to learn an unknown protocol defined in a binary that has never been seen before. In the example, we learned the series of steps required to navigate through a maze. Our competition in the DARPA Cyber Grand Challenge requires this capability — our “reasoning system” will have no prior knowledge and no human guidance, yet must learn to speak with dozens, hundreds, or thousands of binaries, each with unique inputs.

Symbolic Execution

In the first part of this two-part blog post, I’ll explain what symbolic execution is and how symbolic execution allows our “reasoning system” to learn inputs for arbitrary binaries. In the second part of the blog post, I will guide you through the maze solving example presented at THREADS. To describe the power of symbolic execution, we are going to look at three increasingly difficult iterations of a classic computer science problem: maze solving. Once I discuss the power of symbolic execution, I’ll talk about KLEE, an LLVM-based symbolic execution framework, and how mcsema enables KLEE to run on binary-only applications.

Maze Solving

One of the classic problems in first year computer science classes is maze solving. Plainly, the problem is this: you are given a map of a maze. Your task is to find a path from the start to the finish. The more formal definition is: a maze is defined by a matrix where each cell can be a step or a wall. One can move into a step cell, but not into a wall cell. The only valid move directions are up, down, left, or right. A sequence of moves from cell to cell is called a path. Some cell is marked as START and another cell is marked as END. Given this maze, find a path from START to END, or show that no such path exists.

An example maze. The step spaces are blank, the walls are +-|, the END marker is the # sign, and the current path is the X’s.

The typical solution to the maze problem is to enumerate all possible paths from START, and search for a path that terminates at END. The algorithm is neatly summarized in this Stack Overflow post. The algorithm works because it has a complete map of the maze. The map is used to create a finite set of valid paths. This set can be quickly searched to find a valid path.

Maze Solving sans Map

In an artificial intelligence class, one may encounter a more difficult problem: solving a maze without the map. In this problem, the solver has to discover the map prior to finding a path from the start to the end. More formally, the problem is: you are given an oracle that answers questions about maze paths. When given a path, the oracle will tell you if the path solves the maze, hits a wall, or moves to a step position. Given this oracle, find a path from the start to the end, or show there is no path.

The solution to this problem is backtracking. The solver will build the path one move at a time, asking the oracle about the path at every move. If an attempted move hits a wall, the solver will try another direction. If no direction works, the solver returns to the previous position and tries a new direction. Eventually, the solver will either find the end or visit every possible position. Backtracking works because with every answer from the oracle, the solver learns more of the map. Eventually, the solver will learn enough of the map to find the end.
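
For concreteness, here is a sketch of that backtracking search in C. ask_oracle is a hypothetical function that replays a candidate path and reports the oracle's answer; the depth limit max bounds the search, and path must hold max + 1 bytes:

enum reply { SOLVED, WALL, STEP };
extern enum reply ask_oracle(const char *path, int len); /* hypothetical */

static const char moves[4] = {'w', 'a', 's', 'd'};

int solve(char *path, int len, int max) {
    if (len >= max)
        return 0;                       /* depth limit reached: back up */
    for (int i = 0; i < 4; i++) {
        path[len] = moves[i];
        enum reply r = ask_oracle(path, len + 1);
        if (r == SOLVED) {
            path[len + 1] = '\0';       /* found a path to the end */
            return 1;
        }
        if (r == STEP && solve(path, len + 1, max))
            return 1;
        /* WALL: this direction is blocked, try the next one; when all
           four fail we return 0, which backtracks one move. */
    }
    return 0;
}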

Maze Solving with Fake Walls

Let’s posit an even more difficult problem: a maze with fake walls. That is, there are some walls that are really steps. Since some walls are fake, the solver learns nothing from the oracle until it asks about a complete solution. If this isn’t very clear, imagine a map that is made from completely fake walls: for any path, except one that solves the maze, the oracle will always answer “wall.” More formally, the problem now is: given an oracle that will verify only a complete path from the start to the end, solve the maze.

This is vastly more difficult than before: the solver can’t learn the map. The only generic solution is to ask the oracle about every possible path. The solver will eventually guess a valid path, since it must be in the set of all paths (assuming the maze is finite). This “brute force” solver is even more powerful than the previous: it will solve all mazes, map or no map.

Despite its power, the brute force solver has a huge problem: it’s slow and impractical.

Cheat To Win

The last problem is equivalent to the following more general problem: given an oracle that verifies solutions, find a valid solution. Ideally, we want something that finds a valid solution faster than brute force guessing, especially for generic problems where we don’t even know what the inputs look like!

So let’s make a “generic problem solver”. Brute force is slow and impractical because it tries every single concrete input, in sequence. What if a solver could try all inputs at once? Humans do this all the time without even thinking. For instance, when we solve equations, we don’t try every number until we find the solution. We use a variable that can stand in for any number, and algorithmically identify the answer.

So how will our solver try every input at once? It will cheat to win! Our solver has an ace up its sleeve: the oracle is a real program. The solver can look at the oracle, analyze it, and find a solution without guessing. Sadly, this is impossible to do for every oracle (because you run into the halting problem). But for many real oracles, this approach works.

For instance, consider the following oracle that declares a winner or a loser:

x = input();
if (x > 5 && x < 9 && x % 4 == 0) {
  winner();
} else {
  loser();
}

The solver could determine that the winning input must be a number greater than 5, less than 9, and evenly divisible by 4. These constraints can be turned into a system of equations and inequalities and solved, showing the only winning value is 8.
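
Since the input range here is tiny, a quick exhaustive check confirms that reasoning (a sanity check, not the solver itself):

#include <stdio.h>

int main(void) {
    /* x > 5 and x < 9 leaves {6, 7, 8}; x % 4 == 0 leaves only 8. */
    for (int x = 0; x < 100; x++)
        if (x > 5 && x < 9 && x % 4 == 0)
            printf("winner: %d\n", x);   /* prints only 8 */
    return 0;
}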

A hypothetical problem solver could work like this: it treats the input to the oracle as a symbol. That is, instead of picking a specific value as the input, the value is treated as a variable. The solver then applies constraints to the symbol corresponding to the different branches in the oracle program. When the solver finds a “valid solution” state in the oracle, it solves the accumulated constraints on the input. If the constraints are satisfiable, the result is a concrete input that reaches the valid solution state. In effect, the problem solver tries every possible input at once by converting the oracle into a system of constraints.

This hypothetical problem solver is real: the part that discovers the constraints is called a symbolic execution framework, and the part that solves them is called an SMT (satisfiability modulo theories) solver.

The Future Is Now

There are several software packages that combine symbolic execution with SMT solvers to analyze programs. We will be looking at KLEE because it works with LLVM bitcode. We can use KLEE as a generic problem solver to find all valid inputs given an oracle that verifies those inputs. KLEE can solve a maze with hidden walls: Felipe Manzano has an excellent blog post showing how to use KLEE to solve exactly such a maze.
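
To give a flavor of this, here is roughly what a KLEE harness for the winner/loser oracle above could look like. This is a simplified sketch: klee_make_symbolic is KLEE’s real API for marking memory symbolic, but the harness structure here is our own framing:

#include <klee/klee.h>

static int winner_check(int x) {
    /* The oracle from above: only x == 8 satisfies all three conditions. */
    return x > 5 && x < 9 && x % 4 == 0;
}

int main(void) {
    int x;
    /* Mark x symbolic instead of reading a concrete value: KLEE will
     * explore both branches below and solve for inputs reaching each. */
    klee_make_symbolic(&x, sizeof(x), "x");
    if (winner_check(x))
        return 1;   /* KLEE emits a concrete test case (x == 8) for this path */
    return 0;
}

Compiled to LLVM bitcode (for example, with clang -emit-llvm -c) and run under klee, KLEE should explore both branches and emit a concrete test case for each path, including the x == 8 input that reaches the winner branch.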

So what does mcsema have to do with this? Well, KLEE works on programs written in LLVM bitcode. Before mcsema, KLEE could only analyze programs that come with source code. Using mcsema, KLEE can be a problem solver for arbitrary binary applications! For instance, given a compiled binary that checks solutions to mazes with hidden walls, KLEE could find all the valid paths through the maze. Or it could do something more useful, like automatically generate application tests with high code coverage, or maybe even find security bugs in binary programs.

But back to maze solving. In Part 2 of this blog post, we’ll take a binary that solves mazes, use mcsema to translate it to LLVM, and then use KLEE to find all valid paths through the maze. More specifically, we will take Felipe’s maze oracle and compile it to a Linux binary. Then, we will use mcsema and KLEE to find all possible maze solutions. Everything will be done without modifying the original binary. The only thing KLEE will know is how to provide input and how to check solutions. In essence, we are going to show how to use mcsema and KLEE to identify all valid inputs to a binary application.

Speaker Lineup for THREADS ’14: Scaling Security

For every security engineer you train, there are 20 or more developers writing code with potential vulnerabilities. There’s no human way to keep up. We need to be more effective with fewer resources. It’s time to make security a fully integrated part of modern software development and operations.

It’s time to automate.

This year’s THREADS will focus exclusively on automating security. In this single forum, a selection of the industry’s best experts will present previously unseen in-house innovations deployed at major technology firms and share leading research that will become available in the near future.

Buy tickets for THREADS now to get the early-bird special (expires 10/13).

DARPA Returns – Exclusive

If you attended THREADS’13, you know that our showcase of DARPA’s Cyber Fast Track was not to be missed. Good news, folks: DARPA is coming back with a briefing on another exciting project, the Integrated Cyber Analysis System (ICAS). ICAS enables streamlined detection of targeted attacks on large and diverse corporate networks. (Think Target, Home Depot, and JPMorgan Chase.)

We’ll hear from the three players DARPA invited to tackle the problem: Invincea Labs, Raytheon BBN, and Digital Operatives. Each group attempted to meet the project goals in a unique way, and will share their experiences and insights.

Learn about it at THREADS’14 first.

World-Class Speakers at THREADS’14

KEYNOTES

Robert Joyce, Chief, Tailored Access Operations (TAO), NSA

As the Chief of TAO, Rob leads an organization that provides unique, highly valued capabilities to the Intelligence Community and the Nation’s leadership.  His organization is the NSA mission element charged with providing tools and expertise in computer network exploitation to deliver foreign intelligence. Prior to becoming the Chief of TAO, Rob served as the Deputy Director of the Information Assurance Directorate (IAD) at NSA, where he led efforts to harden, protect and defend the Nation’s most critical National Security systems and improve cybersecurity for the nation.

Michael Tiffany, CEO, White Ops

Michael Tiffany is the co-founder and CEO of White Ops, a security company founded in 2013 to break the profit models of cybercriminals. By making botnet schemes like ad fraud unprofitable, White Ops disrupts the criminal incentive to break into millions of computers. Previously, Tiffany was the co-founder of Mission Assurance Corporation, a pioneer in space-based computing that is now a part of Recursion Ventures. He is a Technical Fellow of Critical Assets Labs, a DARPA-funded cyber-security research lab. He is a Subject Matter Advisor for the Signal Media Project, a nonprofit promoting the accurate portrayal of science, technology and history in popular media. He is also a Ninja.

LEADING RESEARCH

Smten and the Art of Satisfiability-based Search
Nirav Dave, SRI

Reverse All the Things with PANDA
Brendan Dolan-Gavitt, Columbia University

Code-Pointer Integrity
Laszlo Szekeres, Stony Brook University

Static Translation of X86 Instruction Semantics to LLVM with McSema
Artem Dinaburg & Andrew Ruef, Trail of Bits

Transparent ROP Detection using CPU Performance Counters
Xiaoning Li, Intel & Michael Crouse, Harvard University

Improving Scalable, Automated Baremetal Malware Analysis
Adam Allred & Paul Royal, Georgia Tech Information Security Center (GTISC)

Integrated Cyber Analysis System (ICAS) Program Brief
Richard Guidorizzi, DARPA

TAPIO: Targeted Attack Premonition using Integrated Operational Data Sources
Invincea Labs

Gestalt: Integrated Cyber Analysis System
Raytheon BBN

Federated Understanding of Security Information Over Networks (FUSION)
Digital Operatives

IN-HOUSE INNOVATIONS

Building Your Own DFIR Sidekick
Scott J Roberts, GitHub

Operating system analytics and host intrusion detection at scale
Mike Arpaia, Facebook

Reasoning about Optimal Solutions to Automation Problems
Jared Carlson & Andrew Reiter, Veracode

Augmenting Binary Analysis with Python and Pin
Omar Ahmed, Etsy & Tyler Bohan, NYU-Poly

Are attackers using automation more efficiently than defenders?
Marc-Etienne M.Léveillé, ESET

Making Sense of Content Security Policy (CSP) Reports @ Scale
Ivan Leichtling, Yelp

Automatic Application Security @twitter
Neil Matatall, Twitter

Cleaning Up the Internet with Scumblr and Sketchy
Andy Hoernecke, Netflix

CRITs: Collaborative Research Into Threats
Michael Goffin, Wesley Shields, MITRE

GitHub AppSec: Keeping up with 111 prolific engineers
Ben Toews, GitHub

Don’t miss out. Buy tickets for THREADS now to get the early-bird special (expires 10/13). You won’t find a more comprehensive treatment of scaling security anywhere else.

We’re Sponsoring the NYU-Poly Women’s Cybersecurity Symposium


Cyber security is an increasingly complex and vibrant field that requires brilliant and driven people working on diverse teams. Unfortunately, women are severely underrepresented, and we want to change that. Career Discovery in Cyber Security is an NYU-Poly event, created in collaboration with influential men and women in the industry. This annual symposium helps guide talented and interested women into careers in cyber security. We know that there are challenges for female professionals in male-dominated fields, which is why we want to create an environment where women have the resources they need to excel.

The goal of this symposium is to showcase the variety of industries and career paths in which cyber security professionals can make their mark. Keynote talks, interactive learning sessions, and technical workshops will prepare participants to identify security challenges and acquire the skills to meet them. A mentoring roundtable, female executive panel Q&A session, and networking opportunities allow participants to interact with accomplished women in the field in meaningful ways. These activities will give an extensive, well-rounded look into possible career paths.

Trail of Bits is a strong advocate for women in the cyber security world at all stages of their careers. In the past, we were participants in the CSAW Summer Program for Women, which introduced high school women to the world of cyber security. We are proud of our involvement in this women’s symposium from its earliest planning stages, continue to offer financial support via named scholarships for attendees, and will take part in the post-event mentoring program.

This year’s symposium is Friday and Saturday, October 17-18 in Brooklyn, New York. For more details and registration, visit the website. Follow the symposium on Twitter or Facebook for news and updates.