Hunting: Recent news and your proxy logs

If you have been reading any of the blog posts or the Twitterverse over the past few months, you might have seen references to some Adobe Flash and Microsoft 0-days being used, and maybe wondered if you have been a victim. Using nuggets of open source intelligence like the info in these posts by ShadowServer and Sophos, you just might have enough data to search your proxy logs and see if you have experienced any of these attacks. You do have your web traffic going through a proxy, don't you? If you don't, you should start doing it and start collecting the logs. The logs are a treasure trove of information if you want to go hunting.

So let’s go on a hunt.

For this hunt, like previous ones, I will be using ArcSight Logger because that is what I have access to. I will also be leveraging Websense for the proxy logs.

Indicators from the ShadowServer and Sophos blogs:


Logger Query:

deviceVendor = "Websense" AND ( ( requestUrlFileName CONTAINS "deploy.html" ) OR ( requestUrlFileName CONTAINS "deployJava.js" ) OR ( requestUrlFileName CONTAINS "movie.swf" ) OR ( requestUrlFileName CONTAINS "BrightBalls.swf" ) OR ( requestUrlQuery CONTAINS "Elderwood=" ) OR ( requestUrlQuery CONTAINS "apple=" ) )

Now this isn't the most efficient query because of the CONTAINS operators, but there is a trade-off when doing searches like this and you just have to be prepared for it. There could also be some false positives, especially around faq.htm, so be prepared to use a little Excel foo. Or you can leave faq.htm off; I will leave that up to you. I did a quick hunt, went back 2 weeks, and got the following hits:

Notice the swf file is different but the query was for Elderwood. If I had just put BrightBalls.swf?Elderwood as a search parameter I would have missed it. Good thing the bad guys used the same query string. So do some experimenting; you might find slicing and dicing on key terms will get you more data and more places to keep hunting.

For me this was a sign that I needed to do more digging: go for some packets to review, if you have them, and perhaps run another search, this time adding Geoffrey.swf as a search parameter to see if there is anything additional there.
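If your proxy logs are not in Logger, the same indicator sweep can be sketched in a few lines of Python. This is a rough equivalent, not the Logger query: the log filename and the assumption that each log line carries the requested URL are mine, so adjust to your Websense export format.

```python
import re

# Indicator terms from the ShadowServer and Sophos posts referenced above
INDICATORS = re.compile(
    r"deploy\.html|deployJava\.js|movie\.swf|BrightBalls\.swf|Elderwood=|apple=",
    re.IGNORECASE,
)

def scan_proxy_log(path):
    """Return raw log lines whose URL matches any indicator term."""
    hits = []
    with open(path) as fh:
        for line in fh:
            if INDICATORS.search(line):
                hits.append(line.rstrip("\n"))
    return hits

if __name__ == "__main__":
    # "websense_export.log" is a placeholder filename
    for hit in scan_proxy_log("websense_export.log"):
        print(hit)
```

Like the Logger query, substring matching will pick up false positives, so review the hits rather than treating them as confirmed incidents.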

As always happy hunting!

Using CIF to create content for ArcSight – Part 2

In my previous post, Using CIF to create content for ArcSight – Part 1, I quickly went over how to populate an active list with data from CIF. Now we are going to take it a step further and start monitoring for hits on that active list, generating some other content for ArcSight. This post is quite long, so here is the TL;DR version:

Create active list for monitoring, create rule to populate the active list when a domain query matches a domain you are monitoring, create active channel/data monitor to watch for events.


This is a very basic active list/correlation rule example you can do much more but this should be a decent starting point.

Also please note that for this example I am using ISC BIND DNS logs, so the query field gets mapped to the DeviceCustomString4 field in ArcSight.

First let’s create another active list :

In the Navigation Panel go to Active Lists and right click your personal folder and select New Active List

New Active List

Next, in the Inspect/Edit Panel, modify the Active List to meet your needs. In this example it will have the name "Suspicious Traffic", it will expire entries in 3 days if an entry is not updated, and 10,000 entries will be allowed. These are all changeable fields after the active list is created.

Now set the fields the Active List will use (these cannot be changed after the list has been created). I have entered:
Attacker Address (Key Field), Target Address, Target Port, Domain, Category Outcome, Description, Source

Now let’s create a filter so that we can match the events we are looking for:

New Filter

I gave the filter the name "Discover Malicious Domain Lookup" and started by creating the following conditions:

Filter Conditions before Active List

I then added an InActiveList condition.

Add In Active List Condition

Then I added a condition that checks whether Device Custom String 4 (the BIND DNS query field) matches the Domain Name field in the Malicious Domains Active List.

Matches Malicious Domain

The filter should look something like this:

Full Filter

Click apply to create the filter and move on to creating the Correlation Rule:

For this example we will be firing on every hit which can be noisy. You might want to tune for your environment and capabilities but this should be a good starting point.

Start by creating your rule

New Rule

Give it a name then click on the conditions tab. Right click and select matches filter:

Matches Filter Condition

Select the filter we created above:

Selecting your filter

Next we will build some local variables that will help with populating the Suspicious Traffic Active List we created. Click the Local Variables tab.

Click the + button to add a new Local Variable. Select List from the Categories window on the left and GetActiveListValue in the Functions window.

Setting the GetActiveListValue Function

Give the variable a name (I chose getDomainWatchList) and map Domain Name to Device Custom String 4. Click OK and create another local variable.

Setting the getDomainWatchList Local Variable

Next we create a variable using the String Categories and the Concatenate function:

String Category Concat function

The variable name is getWatchlistSource and its settings are below. The first string argument is sourced from the getDomainWatchList variable we just created above. The second string argument is blank for this rule variable because we are only matching on this one source.

Get WatchList Source

Next create one more variable using the Concatenate function, called getWatchListDescription. The first string argument is sourced from the getDomainWatchList variable we just created above. This one also has a blank second string argument for the Concatenate function.

Get Watch List Description Variable

Now that all of our variables are set, we need to make sure they are aggregated so they get populated when the rule fires. Click on the Aggregation tab, and under "Aggregate only if these fields are identical" click the Add button. Select the variables getWatchListDescription and getWatchListSource we just created. Click OK and let's get to work on the actions for the rule.

Aggregation Fields to add

First we need to set a few fields that we will use to populate the event created when the rule fires. Deactivate the On First Event action and enable the On Every Event action, then right click and select Add -> Set Event Field. Let's use Flex String 1 and Flex String 2 for that purpose, map them to the variables we created above, and click OK.

Set Flex String Event Fields

We are almost done I promise 🙂
Now we need to add any systems that match the rule to the Suspicious Traffic Active List we created at the beginning of this post. Right click the On Every Event action and select Add -> Active List -> Add to Active List. Below are the mappings for the fields in the Active List.

Add to Active List Action

Now click OK, then go back to the Aggregation tab and add the following fields so that your fields match the ones below:

Aggregation Fields

Now we are ready to apply all the conditions for the rule and deploy it as a real time rule. Once it is deployed, you can try to generate some test hits by doing lookups of domains in the active list against the server creating the DNS logs. Just make sure you know the address of your system :). To monitor for events, create an Active Channel where the filter is Name = Discover Malicious Domains, or whatever name you gave the rule above. If everything works you should soon see events in your active channel.

Active Channel Test Hit
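The test lookups mentioned above can be scripted instead of typed by hand. This is a minimal sketch; the domains listed are purely illustrative, so substitute entries that actually exist in your Malicious Domains active list, and run it from a host whose resolver points at the BIND server doing the logging.

```python
import socket

# Illustrative placeholders -- replace with domains from your active list
TEST_DOMAINS = ["bad1.example.com", "bad2.example.com"]

for domain in TEST_DOMAINS:
    try:
        # Forces a DNS query that your BIND server will log
        socket.gethostbyname(domain)
    except socket.gaierror:
        # NXDOMAIN is fine -- the query itself is what gets logged and matched
        pass
```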

You can then look at the entries for your Suspicious Traffic Active List and you should see entries in there that match the results in the active channel.

Active List Entries

Now you can start to have some fun and create dashboards with data monitors around these types of events. Below is a screenshot of a sample dashboard: the top half is an event graph and the bottom is the top bucketized count of malicious domains queried.

Sample Data Monitors

Now the rest of the content creation is up to you, but hopefully this gets your juices flowing and you come up with some other great use cases for CIF-related data. As always, happy hunting!

Using CIF to create content for ArcSight – Part 1

If you use ArcSight, hopefully by now you have come across the great ArcOSI project for generating content for use within ArcSight. I have used it in the past and liked it, but I found myself having to look for more context around the alerts it generated. I recently came across the Collective Intelligence Framework (CIF) and really like how many intel sources it aggregates (as ArcOSI does) and how it stores the data from each intel source; I think this too can be a great source of content for ArcSight. I have previously blogged about integrating CIF and ArcSight, but that was just using CIF as a tool for looking up data within ArcSight, not using CIF to create content to be used by ArcSight.

EDIT 6/10/2012: if you haven't seen @kylemaxwell's post Introduction to the Collective Intelligence Framework, I highly recommend checking it out!


I think the content CIF provides could be great for Active Lists and correlation rules on those active lists. I came up with a few possible scenarios for how this content could be used:

  • Malicious Domain Queries – DNS Logs
  • Malicious Domain Web Traffic – Proxy Logs
  • Malicious IP Traffic – Firewall/Proxy Logs
  • Scanner Traffic – SSH/Firewall Logs

For the Scanner Traffic case, maybe instead of reporting on the noise of someone knocking on your door, you report only on traffic that was accepted (meaning authentication happened), but that is up to you.

I have been working on a Python script that assumes you are using the CIF Perl client to generate feed data in CSV format; the script parses the CSV files and sends them, like ArcOSI does, to ArcSight via CEF over syslog. I have posted the script and a quick tutorial on it over at the Google Code project cif-csv-parse-to-cef.
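The core of that approach is small enough to sketch here. This simplified version is not the posted script: the CSV column names and the CEF extension keys I chose are assumptions, so check them against your CIF csv output and the CEF sample shown later in this post.

```python
import csv
import socket

def csv_row_to_cef(domain, source, confidence, description):
    """Build a CEF line roughly matching the sample output in this post."""
    return (
        "CEF:0|CIF|CIF 0.1|100|1|CIF Malicious Domain|1|"
        "destinationDnsDomain={} cs1={} cs1Label=Source "
        "cs2={} cs2Label=ConfidenceLevel cs3={} cs3Label=Description"
    ).format(domain, source, confidence, description)

def send_feed(csv_path, syslog_host, syslog_port=514):
    """Parse a CIF csv feed and emit one CEF event per row over UDP syslog."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            # Column names here are assumptions -- match your CIF csv header
            cef = csv_row_to_cef(row["address"], row["source"],
                                 row["confidence"], row["description"])
            # <29> priority as seen in the sample syslog output below
            sock.sendto(("<29>" + cef).encode(), (syslog_host, syslog_port))
```

The SmartConnector on the receiving end parses the CEF extension keys into the ArcSight fields the rule conditions use.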

A quick example for this post will be to generate the domain/malware feed using the medium severity and a confidence level of 85, send it to ArcSight, and have it add the feed data to an Active List. Part 2 of this post will cover writing a correlation rule to monitor the Active List for actionable data.

Let’s start by first creating the Active List and the Correlation Rule to populate the Active List:

In the Navigation Panel go to Active Lists and right click your personal folder and select New Active List

New Active List

Next, in the Inspect/Edit Panel, modify the Active List to meet your needs. In this example it will have the name "Malicious Domains", it will not expire, and 100,000 entries will be allowed (these settings can be changed later). Now set the fields the Active List will use. I have entered:
Domain, Source, Confidence, Description

Active List Edit Panel

Click Apply and all that is left is to create the correlation rule to populate the Active List.

New Rule

Next add a name for your rule

Rule Name

Then click on the Conditions tab and create the following filter.

Rule Conditions

Next click on the Actions tab and make sure you deactivate the trigger for the On First Event action. Then activate the On Every Event trigger.

Deactivate Trigger

After activating the On Every Event Trigger right click and select Add -> Active List -> Add to Active List

Select the active list you previously created, in this case Malicious Domains.

Select Active List

After selecting the Active List you will have to map ArcSight event fields to the corresponding Active List fields.

Active List Action

Once you click OK, you will most likely get a pop-up message similar to this, asking if you want to add all the ArcSight fields you mapped in the previous step to the Aggregation tab. Click Yes; if you don't, your active list will be blank after the rule fires.

Aggregation Question

Now deploy the rule as a real time rule. Your account will need privileges to do that; if you don't have them, ask your ArcSight admin to deploy the rule for you.

Now that the rule and active list have been created, let's generate content for the rule to populate the active list with.

Let’s start by generating the csv:

$ cif -q domain/malware -s medium -c 85 -p csv > dom_malware.csv

Now run the script

$./ -f dom_malware.csv -s -p 514 -t Domain

You will see output on the screen similar to this

<29>CEF:0|CIF|CIF 0.1|100|1|CIF Malicious Domain|1| cs1Label=Source cs2=85 cs2Label=ConfidenceLevel cs3=malware cs3Label=Description

Now if you have an Active Channel up and running with a filter for Device Vendor = CIF and  Name = CIF Malicious Domain you should see something similar to this.

CIF Active Channel

Now if you right click your active list and show entries you should also see that your Active List is being populated with data.

Populated Active List

This concludes Part 1 – Part 2 will cover writing a correlation rule to monitor the Active List for actionable data.

Happy Hunting!

CIF Integration with ArcSight

I have been playing with and testing the Collective Intelligence Framework (CIF), and after seeing these great posts by Martin Holste and Brad Shoop on integrating CIF into ELSA and Splunk, I got motivated to do the same thing with the ArcSight ESM console. EDIT 6/10/2012: if you haven't seen @kylemaxwell's post Introduction to the Collective Intelligence Framework, I highly recommend checking it out!

There are several steps to integrating CIF with ArcSight; before you start, make sure you know the following:

  • CIF API Key
  • CIF API url

To start off, go to the Integration Commands Navigation Panel, right click your personal Integration Commands folder, and select New Command.

New Integration Command

From there in the Inspect/Edit Panel you will want to select URL as the type of command:

Selecting New Command Type URL

You will want to give the new command a name; for this example I used CIF (original, huh?). You can then double click the URL field, where you will want to put in the CIF API URL and your CIF API key. If you look at the screenshot below you will also see $selectedItem; this is the field that will get populated by what you select in the ArcSight Console. Once you have your API URL, key, and $selectedItem set, you can click OK, then click Apply.

You are now halfway there :). Your command is set; next you need to set up the Integration Command Configuration. In the Navigator Panel, click the Configuration tab, right click your Integration Configuration folder, and select New Configuration.

New Integration Configuration

In the Inspect/Edit Panel you can now select the name of the configuration and whether it will be rendered in an internal or external browser.

Configuration Name and Browser

Next click on the Context tab and select which ArcSight contexts you want this CIF search enabled for. In this example I have selected the Editor and Viewer locations. You may choose others, so play around and see what works best for you.

Config Context Examples

Next click the Commands tab, click Add, and select the command you created earlier, in this case the CIF command.

Selecting Config Command

Click OK then Apply and your Integration Command is ready to go.

Fire up an Active Channel that you use, select either an IP Address field or a Host Name field, right click, and select Integration Commands -> CIF. In the example below we have right clicked an Attacker Host Name and selected the CIF Integration Command.

Selecting Integration

In a few seconds your external browser (if that is what you chose and configured) should load and you should see something similar to this:

CIF Query in External Browser

Now you are done and you have CIF integrated into your ArcSight Console. If you have tried out Martin's CIF-REST-Sphinx add-on, you could configure that as an Integration Command as well.

Now go have fun hunting!

Hunting: Finding lateral movement using Snare and ArcSight Logger

Once again I received inspiration for this post from the Mandiant M-Trends 2012: An Evolving Threat report, and from reflecting on a previous work engagement where the attackers leveraged lateral movement to move around and deeper into the network. On page 12 the report highlights attackers leveraging at.exe (task scheduler) to install malware and take control of systems. This post hopefully will help you get an idea of what your current scheduled tasks look like and get you thinking about ways to find badness when it occurs. Yes, it will occur!

In the M-Trends example the attacker creates a NetBIOS session first and then runs the at.exe command to schedule the malware they previously uploaded over the NetBIOS session. These two steps should create some events in the Windows event logs, assuming auditing is turned on. For the NetBIOS connection, an event ID of 540 should be created on XP and Server 2003 systems and an event ID of 4624 on Vista and Server 2008 systems. For the at.exe task, the event ID is 602 on XP and Server 2003 systems and 4702 on Vista and Server 2008 systems.
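As a quick reference, the event ID pairs above can be kept in a small lookup so later searches cover both OS generations (the IDs are exactly the ones listed in the paragraph; the structure is just a convenience):

```python
# Windows event IDs per OS generation for the two steps described above
LATERAL_MOVEMENT_IDS = {
    "network_logon":  {"xp_2003": 540, "vista_2008": 4624},
    "scheduled_task": {"xp_2003": 602, "vista_2008": 4702},
}

def ids_for(step):
    """Return all event IDs to search for a given step, regardless of OS."""
    return sorted(LATERAL_MOVEMENT_IDS[step].values())
```

Searching on both IDs for each step keeps the hunt working in mixed XP/2003 and Vista/2008 environments.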

To get to the point where you can actually hunt for at.exe events you must do a little leg work.

Set the audit policy:

At a minimum you need to have a few things turned on in the Local Security Policy under the Audit Policy. You will want to enable Success and Failure audits for the following audit policies:

  • Audit account logon events
  • Audit logon events
  • Audit Object Access

It should be noted that, where possible, you should turn on as many of the available audit policies as you can for your environment. Having these logs helps find not only badness but misconfigurations and other issues that might creep up. They are easy to set up and push out via Group Policy Objects (GPO), but make sure you watch for changes to the GPO in your logs. Some attackers have been known to modify GPOs and turn settings off.

Now that you have your Auditing Policy in place you need to enable logging.

Enabling Logging:

For central Windows logging that should work with almost any commercial or open source central log collection tool, I recommend using Snare as your agent for getting the logs from your Windows systems to whatever central log system you have. You do have one, right?

The install for Snare is pretty straightforward and is covered well in their documentation, as is adding a remote syslog host, so I won't cover that here. What I will cover is one minor addition that I have found needs to be made to capture and send at.exe related event logs. Start by logging in to the Snare configuration page and selecting Objectives Configuration on the left hand side. When editing the auditing configuration, here is what I have used in my testing to get the logs I am interested in for this hunting trip:

Sample Snare Config

After adding the configuration above, go to the left navigation bar and select Apply the Latest Audit Configuration. Now you may want to test and try to create an at.exe event on the system you just applied this configuration to. To test it you will need to be either a domain admin or a local admin of the system. A sample test you could use from the command prompt is:

at.exe \\srv1 07:30 cmd /c ping.exe

Replace srv1 with your host name and change what is after cmd /c to something you want the system to execute. The command above will create a task that runs at 7:30 in the morning and executes a ping.

Now go back to your Snare Configuration and look at the Latest Events and you should see near the top the scheduled task you just created.

Hunting with Logger

Now that you have configured the audit policy and Snare, it's time to go hunting for the logs. In this hunt we are using the free version of ArcSight Logger (in future posts I will explore using Snare, ELSA, and maybe a few other tools). I am going to assume your Logger instance is already set up and you have a SmartConnector in place to receive logs from Snare.

Quick Initial Search

A quick and dirty search for scheduled tasks is as simple as the filter below; just hit Go!:

Logger Search Filter

Search: (externalId=602 or externalId=4702)

Now if you have any hits you might get output similar to this:

Logger Search Results

Now that you have results, you should probably go and do a search for network logon events (event ID 540 or 4624) around those times to see where the commands originated from. This will help you find lateral movement.
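If you export both result sets, a rough correlation pass can pair each scheduled-task event with network logons seen shortly before it on the same host. The event shape here (a dict with 'time' and 'host' keys) and the 10-minute window are assumptions; adapt the parsing to whatever your Logger CSV export looks like.

```python
from datetime import timedelta

def correlate(task_events, logon_events, window_minutes=10):
    """Pair each scheduled-task event with logon events seen shortly before it.

    Each event is assumed to be a dict with at least a 'time' (datetime)
    and a 'host' key, e.g. parsed from a Logger CSV export.
    """
    window = timedelta(minutes=window_minutes)
    pairs = []
    for task in task_events:
        for logon in logon_events:
            if logon["host"] == task["host"] and \
               task["time"] - window <= logon["time"] <= task["time"]:
                pairs.append((task, logon))
    return pairs
```

A task event with no nearby network logon may just be local admin activity; the interesting hits are tasks preceded by a network logon from a source you don't recognize.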

What to do after your initial search?

We have done our initial search, and hopefully all of your events are ones that were planned and not ones done by someone on your network. Perhaps you don't want to run this query every day, or maybe you don't want to log in every day to run it. You could quickly turn this into a report and have it run on a set interval and email you the results. Below I will quickly cover how to create the query and report needed on Logger.

Creating the Query:

Let’s create a quick and dirty query and report that can be touched up later if needed 🙂

Under the Reports tab on the left hand side of Logger, go to Design and select Queries, then click Add New at the top. Give it a name, and you could start by using a query similar to this:

select events.arc_endTime AS 'Time', events.arc_name AS 'Name', events.arc_destinationUserName AS 'Dest. User Name', events.arc_destinationHostName AS 'Dest. Host Name', events.arc_message AS 'Message' from events where ( events.arc_externalId = 602 OR events.arc_externalId = 4702 ) group by events.arc_endTime

Below is what my quick Query Object looks like:

Logger Query

After creating the query you will need to create the report. Simply give it a name, select the query you just created, and then select the fields you want displayed. Save the report and run it. Here is what my quick and dirty report design looks like:

Logger Report

Depending on your Report Start and End Times you might get data similar to your quick logger search above.

Report output

You should now have a way to find badness if there was any (assuming you have historical logs), or at least a way to monitor for badness and respond faster.

If you have other suggestions or tricks you use for these types of searches, I would love to see them. We are all one big community; let's help each other out where we can.

As always Happy Hunting!

Hunting: Internal DNS Logs using ArcSight Logger

If you have read the latest Mandiant M-Trends 2012: An Evolving Threat report you might have noticed on page 10 this statement:

The ZIP archive contained several benign files and an executable disguised as a PDF document via a modified resources section. When executed, the malware beaconed to a domain that contained the organization’s specific name as the third level of the address (such as “”).

Then later in the report the Mandiant folks call out the need for having your internal DNS logs as a way to combat these types of attacks. This got me thinking about how I could go hunting through my organization's internal DNS logs. Thankfully, we have these logs and they are being forwarded to an ArcSight Logger, so for this post I am going to leverage Logger for searching internal DNS logs.

Let's assume for this exercise your organization's name is LMN Widget Maker Inc and you are customarily known as lmnwidgets; in the Mandiant example above, the malware would have beaconed to a domain with lmnwidgets as its third-level label.

For this exercise I am using BIND DNS for the logs, so your queries might have to change for Microsoft DNS, but you should get the idea. I will also show the results with a limited field set so you only see the important data for this exercise.

You will need to search query events, and you will want to exclude queries for your organization's own domain. From there you will have to leverage the capabilities of ArcSight and use a CONTAINS operator in the search for lmnwidgets. Your search filter would look something like this:

Logger Search for lmnwidget

And for those of you used to creating filters in ESM it would look like this:

LMNWidget Search ESM

This hopefully would not result in any events, but in this exercise it did.

LMNWIDGET Search Results

Now that you have found some interesting results from your searches, you can dig a little deeper and take it from there.
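Outside Logger, the same check is easy to sketch in a few lines: flag any queried name that contains your short name but is not actually under your own domain. The ORG_TOKEN/ORG_DOMAIN values are the fictional lmnwidgets example from this post; swap in your own, and adapt the parsing to your BIND query-log format.

```python
ORG_TOKEN = "lmnwidgets"        # your customary short name
ORG_DOMAIN = "lmnwidgets.com"   # your real domain, excluded from matching

def suspicious_query(qname):
    """True if the org token appears in a queried domain that is not ours."""
    qname = qname.lower().rstrip(".")
    # Queries for our own domain (or anything under it) are expected traffic
    if qname == ORG_DOMAIN or qname.endswith("." + ORG_DOMAIN):
        return False
    return ORG_TOKEN in qname
```

Like the Logger CONTAINS search, this is substring matching, so expect the occasional coincidental hit and review results before escalating.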

Happy Hunting!!!!