Using CIF to create content for ArcSight – Part 1

If you use ArcSight, hopefully by now you have come across the great ArcOSI project for generating content for use within ArcSight. I have used it in the past and liked it, but I found myself having to look for more context around the alerts it generated. I recently came across the Collective Intelligence Framework (CIF) and really like how it aggregates many intel sources like ArcOSI does, and how it stores the data from each intel source; I think it too can be a great source of content for ArcSight. I have previously blogged about integrating CIF and ArcSight, but that was just using CIF as a tool for looking up data from within ArcSight, not using CIF to create content for ArcSight to consume.

EDIT 6/10/2012: if you haven’t seen @kylemaxwell‘s post Introduction to the Collective Intelligence Framework, I highly recommend checking it out!


I think the content CIF can provide could be great for Active Lists and correlation rules on those Active Lists. I came up with a few possible scenarios for how this content could be used:

  • Malicious Domain Queries – DNS Logs
  • Malicious Domain Web Traffic – Proxy Logs
  • Malicious IP Traffic – Firewall/Proxy Logs
  • Scanner Traffic – SSH/Firewall Logs

For the scanner traffic, maybe instead of reporting on the noise of someone knocking on your door, you report only on traffic that was accepted (meaning authentication happened), but that is up to you.

I have been working on a Python script that assumes you are using the CIF Perl client to generate feed data in CSV format; the script then parses the CSV files and, like ArcOSI, sends the data to ArcSight as CEF over syslog. I have posted the script and a quick tutorial over at the Google Code project cif-csv-parse-to-cef.
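To give a feel for what such a script does, here is a minimal sketch of the CSV-to-CEF-over-syslog idea. This is not the actual script: the CSV column names (address, confidence, description) and the CEF field mapping are assumptions for illustration.

```python
import csv
import socket

def row_to_cef(row):
    """Build a CEF message from one CIF CSV row (assumed column names)."""
    # CEF header: Version|Vendor|Product|DeviceVersion|SignatureID|Name|Severity
    return (
        "CEF:0|CIF|CIF 0.1|100|1|CIF Malicious Domain|1|"
        "cs1={address} cs1Label=Source "
        "cs2={confidence} cs2Label=ConfidenceLevel "
        "cs3={description} cs3Label=Description"
    ).format(**row)

def send_csv_as_cef(csv_path, syslog_host, syslog_port=514):
    """Send each CSV row as a CEF message over UDP syslog."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # <29> = syslog priority (facility daemon, severity notice)
            sock.sendto(("<29>" + row_to_cef(row)).encode(),
                        (syslog_host, syslog_port))
    sock.close()
```

The real script handles the actual CIF CSV layout and command-line options; this just shows the shape of the transformation.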

A quick example for this post: generate the domain/malware feed using medium severity and a confidence level of 85, send it to ArcSight, and have ArcSight add the feed data to an Active List. Part 2 of this post will cover writing a correlation rule to monitor the Active List for actionable data.

Let’s start by first creating the Active List and the Correlation Rule to populate the Active List:

In the Navigation Panel, go to Active Lists, right click your personal folder, and select New Active List.

New Active List

Next, in the Inspect/Edit Panel, modify the Active List to meet your needs. In this example it is named “Malicious Domains”, it does not expire, and it allows 100,000 entries (these settings can be changed later). Now set the fields the Active List will use. I have entered:
Domain, Source, Confidence, Description

Active List Edit Panel

Click Apply and all that is left is to create the correlation rule to populate the Active List.

New Rule

Next add a name for your rule

Rule Name

Then click on the Conditions field and create the following filter.

Rule Conditions

Next, click on the Actions tab and make sure you deactivate the trigger for On First Event. Then activate the On Every Event trigger.

Deactivate Trigger

After activating the On Every Event Trigger right click and select Add -> Active List -> Add to Active List

Select the Active List you previously created, in this case Malicious Domains.

Select Active List

After selecting the Active List you will have to map ArcSight event fields to the corresponding Active List fields.

Active List Action

Once you click OK, you will most likely get a pop-up message similar to this, asking if you want to add all the ArcSight fields you mapped in the previous step to the aggregation tab. Click Yes; if you don’t, your Active List will be blank after the rule fires.

Aggregation Question

Now deploy the rule as a real-time rule. Your account will need privileges to do that; if you don’t have them, ask your ArcSight admin to deploy the rule for you.

Now that the rule and Active List have been created, let’s generate content for the rule to populate the Active List with.

Let’s start by generating the csv:

$ cif -q domain/malware -s medium -c 85 -p csv > dom_malware.csv

Now run the script

$./ -f dom_malware.csv -s -p 514 -t Domain

You will see output on the screen similar to this

<29>CEF:0|CIF|CIF 0.1|100|1|CIF Malicious Domain|1| cs1Label=Source cs2=85 cs2Label=ConfidenceLevel cs3=malware cs3Label=Description
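If you want to sanity-check messages like the one above before pointing them at ArcSight, a small parser sketch can split the header and extension. This is an illustrative helper, not part of the script, and the whitespace split on the extension is a simplifying assumption (real CEF values can contain spaces).

```python
def parse_cef(line):
    """Split a syslog-wrapped CEF line into header fields and extension pairs."""
    msg = line.split(">", 1)[-1]        # strip the "<29>" syslog priority
    parts = msg.split("|", 7)           # 7 header fields, then the extension
    header = dict(zip(
        ["version", "vendor", "product", "device_version",
         "signature_id", "name", "severity"], parts[:7]))
    header["version"] = header["version"].replace("CEF:", "")
    ext = {}
    for pair in parts[7].split():       # naive: assumes no spaces in values
        if "=" in pair:
            key, value = pair.split("=", 1)
            ext[key] = value
    return header, ext
```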

Now, if you have an Active Channel up and running with a filter for Device Vendor = CIF and Name = CIF Malicious Domain, you should see something similar to this.

CIF Active Channel

Now, if you right click your Active List and select Show Entries, you should see that your Active List is being populated with data.

Populated Active List

This concludes Part 1 – Part 2 will cover writing a correlation rule to monitor the Active List for actionable data.

Happy Hunting!


Hunting: Finding lateral movement using Snare and ArcSight Logger

Once again I received inspiration for this post from the Mandiant M-Trends 2012: An Evolving Threat report, and from reflecting on a previous work engagement where the attackers leveraged lateral movement to move around and deeper into the network. On page 12 the report highlights attackers leveraging at.exe (the task scheduler) to install malware and take control of systems. This post will hopefully help you get an idea of what your current scheduled tasks look like and get you thinking about ways to find badness when it occurs. Yes, it will occur!

In the M-Trends example the attacker first creates a NetBIOS session and then runs the at.exe command to schedule the malware they previously uploaded over the NetBIOS session. These two steps should create some events in the Windows event logs, assuming auditing is turned on. For the NetBIOS connection, an event ID of 540 should be created on XP and Server 2003 systems, and an event ID of 4624 on Vista and Server 2008 systems. For at.exe, it is event ID 602 on XP and Server 2003 and event ID 4702 on Vista and Server 2008.
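Since the IDs differ by Windows generation, it can help to keep them in one place when building searches. A tiny reference helper, using exactly the IDs above (the helper itself is just an illustrative sketch):

```python
# Event IDs from the M-Trends scenario, keyed by activity and Windows generation.
EVENT_IDS = {
    "network_logon":  {"xp_2003": 540, "vista_2008": 4624},
    "scheduled_task": {"xp_2003": 602, "vista_2008": 4702},
}

def ids_to_watch(activity):
    """Return every event ID to include in a search for the given activity."""
    return sorted(EVENT_IDS[activity].values())
```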

To get to the point where you can actually hunt for at.exe events, you must do a little legwork.

Set the audit policy:

At a minimum you need a few things turned on in the Local Security Policy for auditing. You will want to enable Success and Failure audits for the following audit policies:

  • Audit account logon events
  • Audit logon events
  • Audit Object Access

It should be noted that, where possible, you should turn on as many of the available audit policies as your environment allows. Having these logs helps you find not only badness but also misconfigurations and other issues that might creep up. They are easy to set up and push out via Group Policy Objects (GPOs), but make sure you watch for changes to the GPO in your logs; some attackers have been known to modify GPOs and turn settings off.

Now that you have your Auditing Policy in place you need to enable logging.

Enabling Logging:

For central Windows logging that should work with almost any commercial or open source central log collection tool, I recommend using Snare as your agent for getting the logs from your Windows systems to whatever central log system you have. You do have one, right?

The install for Snare is pretty straightforward and is covered well in their documentation, as is adding a remote syslog host, so I won’t cover that here. What I will cover is one minor addition I have found needs to be made to capture and send at.exe-related event logs. Start by logging in to the Snare configuration page and select Objectives Configuration on the left hand side. When editing the auditing configuration, here is what I have used in my testing to get the logs I am interested in for this hunting trip:

Sample Snare Config

After adding the configuration above, go to the left navigation bar and select Apply the Latest Audit Configuration. Now you may want to test by creating an at.exe event on the system you just applied this configuration to. To test it you will need to be either a domain admin or a local admin of the system. A sample test you could run from the command prompt is:

at.exe \\srv1 07:30 cmd /c ping.exe

Replace srv1 with your host name and change what comes after cmd /c to something you want the system to execute. The command above creates a task that runs at 7:30 in the morning and executes a ping.

Now go back to your Snare configuration, look at the Latest Events, and you should see the scheduled task you just created near the top.

Hunting with Logger

Now that you have configured the audit policy and Snare, it’s time to go hunting for the logs. For this hunt we are using the free version of ArcSight Logger (in future posts I will explore using Snare, ELSA, and maybe a few other tools). I am going to assume your Logger instance is already set up and you have a SmartConnector in place to receive logs from Snare.

Quick Initial Search

A quick and dirty search for scheduled tasks is as simple as the filter below; enter it and hit Go!:

Logger Search Filter

Search: (externalId=602 or externalId=4702)

Now if you have any hits you might get output similar to this:

Logger Search Results

Now that you have results, you should follow up with a search for network logon events (event ID 540 or 4624) around those times to see where the commands originated. This will help you find lateral movement.
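The follow-up step above is just a time-window correlation, and the idea can be sketched in a few lines of Python. The event dictionaries and field names here are assumptions for illustration, not what Logger exports.

```python
from datetime import datetime, timedelta

TASK_IDS = {602, 4702}    # scheduled-task creation
LOGON_IDS = {540, 4624}   # network logon

def correlate(events, window_minutes=10):
    """Pair each task-creation event with network logons that happened
    on the same host within the preceding window.

    events: list of dicts with 'time' (datetime), 'event_id', 'host'.
    """
    tasks = [e for e in events if e["event_id"] in TASK_IDS]
    logons = [e for e in events if e["event_id"] in LOGON_IDS]
    window = timedelta(minutes=window_minutes)
    hits = []
    for task in tasks:
        for logon in logons:
            if (logon["host"] == task["host"]
                    and timedelta(0) <= task["time"] - logon["time"] <= window):
                hits.append((logon, task))
    return hits
```

A task created shortly after a network logon on the same host is exactly the lateral-movement pattern the M-Trends example describes.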

What to do after your initial search?

We have run our initial search, and hopefully all of your events are ones that were planned and not created by someone on your network. Perhaps you don’t want to run this query every day, or maybe you don’t want to log in every day to run it. You can quickly turn this into a report, have it run at a set interval, and have it email you the results. Below I will quickly cover how to create the query and report needed on Logger.

Creating the Query:

Let’s create a quick and dirty query and report that can be touched up later if needed 🙂

Under the Reports function tab on the left hand side of Logger, go to Design and select Queries, then click Add New at the top. Give it a name; you could start with a query similar to this:

select events.arc_endTime AS 'Time', events.arc_name AS 'Name', events.arc_destinationUserName AS 'Dest. User Name', events.arc_destinationHostName AS 'Dest. Host Name', events.arc_message AS 'Message' from events where (events.arc_externalId = 602 OR events.arc_externalId = 4702) group by events.arc_endTime

Below is what my quick Query Object looks like:

Logger Query

After creating the query you will need to create the report. Simply give it a name, select the query you just created, and then select the fields you want displayed. Save the report and run it. Here is what my quick and dirty report design looks like.

Logger Report

Depending on your report start and end times, you might get data similar to your quick Logger search above.

Report output

You should now have a way to find badness if there was any (assuming you have historical logs), or at least a starting point for monitoring, finding badness, and responding faster.

If you have other suggestions or tricks you use for these types of searches, I would love to see them. We are all one big community; let’s help each other out where we can.

As always Happy Hunting!

Hunting: Internal DNS Logs using ArcSight Logger

If you have read the latest Mandiant M-Trends 2012: An Evolving Threat report you might have noticed on page 10 this statement:

The ZIP archive contained several benign files and an executable disguised as a PDF document via a modified resources section. When executed, the malware beaconed to a domain that contained the organization’s specific name as the third level of the address (such as “”).

Later in the report, the Mandiant folks call out the need for internal DNS logs as a way to combat these types of attacks. This got me thinking about how I could go hunting through my organization’s internal DNS logs. Thankfully, we have these logs and they are being forwarded to an ArcSight Logger, so for this post I am going to leverage Logger for searching internal DNS logs.

Let’s assume for this exercise your organization’s name is LMN Widget Maker Inc and you are customarily known as lmnwidgets; in the Mandiant example above, the malware would have beaconed to a domain containing lmnwidgets as the third level of the address.

For this exercise I am using BIND DNS logs, so your queries might have to change for Microsoft DNS, but you should get the idea. I will also show the results with a limited field set so you see only the data that matters for this exercise.

You will need to search query events, and you will want to exclude queries for your organization’s own domain. From there, leverage ArcSight’s CONTAINS operator to search for lmnwidgets. Your search filter would look something like this:
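The same logic can be sketched outside Logger for spot-checking raw query logs: flag any query name that contains the org’s short name but is not under the org’s own domain. The domain names here are made up for the example.

```python
ORG_NAME = "lmnwidgets"          # the organization's short name
ORG_DOMAIN = "lmnwidgets.com"    # the organization's legitimate domain

def suspicious(query_name):
    """True if a DNS query name contains the org name outside the org's own domain."""
    q = query_name.lower().rstrip(".")
    if q == ORG_DOMAIN or q.endswith("." + ORG_DOMAIN):
        return False             # queries for our own domain are expected
    return ORG_NAME in q         # org name appearing anywhere else is worth a look
```

Run over a day of BIND query logs, anything this flags is a candidate for the deeper digging described below.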

Logger Search for lmnwidget

And for those of you used to creating filters in ESM it would look like this:

LMNWidget Search ESM

Ideally this search would not return any events, but in this exercise it did.

LMNWIDGET Search Results

Now that you have some interesting results from your searches, you can dig a little deeper and take it from there.

Happy Hunting!!!!