Wednesday, December 7, 2011

Data Validation Framework – HDIV at a Glance


Security studies have repeatedly shown that most web application attacks (approximately 85%, per Gartner and NIST) originate at the application layer. It has always been a challenge for developers to validate parameters in the URL, HTTP headers, the HTTP request body and non-editable fields on the page. We also see many irregularities in fixing parameter manipulation vulnerabilities, i.e., an attacker accessing data of other users or acting on their behalf. The traditional solution, suggested and widely implemented, is to map the user role to a hidden variable and then validate it on the server side. But this solution doesn’t work well for everyone.

HDIV (HTTP Data Integrity Validator) is a Java web application security framework for applications built on Struts 1.x, Struts 2.x, Spring MVC and JSTL. The framework guarantees integrity and confidentiality of the data exchanged between client and server, and protects against CSRF attacks. It divides HTTP request data into two parts:

  • Editable data – text boxes and text areas
  • Non-editable data – links, hidden fields, combo values, radio buttons, destination pages, cookies, etc.

Working

For every page sent to the client, HDIV appends a state parameter (_HDIV_State) carrying a random token to the links and forms on that page. The token value is calculated based on the chosen working strategy (Hash, Cipher or Memory) and is used on the server side to validate all non-editable data in the request. An HDIV-protected HTTP request looks something like this:
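(The request below is a made-up illustration: the _HDIV_State parameter name is real, but the URL, the other parameter and the token value are purely hypothetical.)

GET /app/accountDetail.do?accountId=2&_HDIV_State=14-8-F8E9A3B1C07D HTTP/1.1
Host: www.example.com
Cookie: JSESSIONID=0A1B2C3D4E5F6A7B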


More importantly, HDIV can also hide or mask the original value of a parameter. Say we have an application in which the admin role is identified by the parameter role=1001; after implementing HDIV the parameter will look like role=1 (an arbitrary replacement value). This prevents an attacker from guessing the original value of the parameter.

HDIV also lets you define custom validations, configured in XML, for the editable input fields on a page. Moreover, installing HDIV does not disturb your existing application configuration.
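As a rough idea of what such an XML validation can look like, here is a sketch written from memory; the element names follow HDIV's editable-validation configuration style but may differ between HDIV versions, so treat the whole snippet as illustrative and check the HDIV documentation for the exact schema:

<hdiv:validation id="safeText">
    <hdiv:acceptedPattern><![CDATA[^[a-zA-Z0-9@._\- ]*$]]></hdiv:acceptedPattern>
</hdiv:validation>

<hdiv:editableValidations registerDefaults="true">
    <hdiv:validationRule url="/search.*">safeText</hdiv:validationRule>
</hdiv:editableValidations>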

Performance

Performance is the most important criterion when we choose to add something extra to our application or server. As expected, HDIV does consume some extra memory on the server, but on a reasonably configured server the added response time is barely noticeable. HDIV’s performance depends on the chosen working strategy.

Hope this helps your developers fix many of your web application security problems.

Happy Reading!!!

Thursday, October 13, 2011

Pentesting Thick Client Apps


Pentesting thick client applications is not a new concept; rather, the techniques adopted are new and interesting. I’m a bit lazy on explaining what thick client apps are, please refer here for more info. GTalk, Pidgin, Skype and MSN are a few examples of thick client applications. These days many financial institutions are adopting the technology for internal transaction purposes.

The Challenges:
  • Typical thick client apps do not communicate over HTTP/HTTPS (though some do), so you cannot intercept their traffic with regular web proxy tools
  • Unknown modification to registry/system files
  • Unknown technical details of architecture
  • Manipulating client-server communication over the wire
  • Encryption techniques adopted by client software
  • The assumption that the application will be used only by trusted users
 The above list mentions just a few of the challenges we regularly face while pentesting thick clients.

Way To Go:

Understanding Architecture
  
Thick client applications are generally 2-tier applications, meaning the request is constructed at the user’s end (client) and sent to the server for processing. There is no web server or middleware sitting in the middle; the client communicates directly with the database. This can be identified by observing the time lapse between request and response or by analyzing the communication traffic in Wireshark.

The architecture can also be hybrid, i.e., listening on both HTTP/S and some other non-standard port. In this case we may have to use a combination of tools to intercept and modify the communication.

Intercepting/Manipulating Client-Server Communication

The two most popular open source tools are Echo Mirage and ITR. The one I prefer is Echo Mirage, because of its simplicity: it hooks directly into your client executable and starts intercepting traffic on the go. There is also an option to hook into a running process associated with your client exe. Here’s how you can do the above steps:


After you do this successfully, all the traditional application security checks are applicable. If you are lucky, you may even see SQL queries passing through the Echo Mirage interceptor.

Local Storage of Sensitive Information

Sensitive information can be clear-text passwords, server configuration, users’ personal details, users’ financial details, etc. Look for .ini, .cfg, .dat and .log files in the application folder for application-related sensitive information. Generally, you will find the server configuration in .ini files.
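As a quick helper, a small Perl sketch along these lines can list those file types under the application folder and flag lines containing obvious keywords; the install path and keyword list below are assumptions you would adapt to the target:

use strict;
use warnings;
use File::Find;

my $appdir   = 'C:/Program Files/TargetClient';    # assumed install folder
my $keywords = qr/pass|pwd|user|server|connect/i;  # assumed keywords of interest

find(sub {
    return unless -f $_ && /\.(ini|cfg|dat|log)$/i;  # file types mentioned above
    print "Found: $File::Find::name\n";
    open(my $fh, '<', $_) or return;
    while (my $line = <$fh>) {
        print "  line $.: $line" if $line =~ $keywords;
    }
    close $fh;
}, $appdir);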

File and Registry Modification Analysis/Reverse Engineering

Two more popular tools are Filemon and Regmon from Sysinternals; both are now packaged into a single tool, Process Monitor. These tools show which files are accessed and which registry keys are modified when you launch your client executable. Here, you need to look for interesting files and investigate further; the filename can hint at which file you should investigate. This helps with reverse engineering the application.

Regmon lists all registry entries that are accessed when you launch your client executable. Use the registry search feature to look for keywords, passwords and other sensitive information.

Happy Reading!!!

Thursday, September 8, 2011

Malware Attack Analysis


Recently, we have seen a massive increase in malware attacks. Hackers find weak holes (vulnerabilities) in a system or application, exploit them to gain access and end up infecting it with malware. The attack is usually aimed at a huge audience, i.e., the website’s legitimate users. Malware can be spread by various means: an email attachment, a file download, JavaScript executed on page load, broken links, page redirects, etc.

In a recent malware analysis activity, I noticed a hacker adopting a different approach to infect a website. The hacker exploited weak FTP credentials to gain access to the web folder and infected the application’s supporting files (js, cs, html) instead of the main application pages. The malicious script executed in the user’s browser, showed an unavoidable pop-up posing as Microsoft Security Essentials, and the pop-up disappeared after the user clicked the “OK” button. The script also executed a function from within the page which silently sent the valid sessions open in other tabs of the same browser to the attacker’s website, eventually compromising user accounts through session hijacking. Think of a less secure website which sends authentication credentials in a cookie!!!

There were two interesting points in this attack:
  • The attacker injected the malicious code in Base64-encoded format and referenced a decode function from within the file so that the browser could execute it
  • The attacker infected only a few supporting files, leaving the main application files untouched, to get past malware detection

The attack was identified and reported by a legitimate user who knew something about security and noticed that Microsoft Security Essentials was not even installed on his system.

The Steps: Root Cause Analysis

Analyzing malware requires effort, time, skill and at least minimal knowledge of the application. Below are a few mandatory questions that must be asked before you conduct an RCA for a website:

  • How many entry points does your application have?
  • Do you have system and application logs?
  • Why do you suspect your website is infected?
  • How do you manage your website?

After gathering these answers, you will know which direction to look in. Ask the website owner for the web application files, application logs, system logs and firewall logs, if they exist. The next step is to adopt an approach for the analysis:

  • Identify and block hacker access on your server
  • Backup old infected code
  • Identify the activity/action of the malware, like installing a backdoor, stealing session cookies, fake redirects, etc.
  • Replicate the malware attack at your end to verify the malware behavior
  • Analyze every application file for maliciously injected script. File size may give you a hint of which files are infected; in my case, every infected file was about 2 KB larger than the original (see the sketch after this list)
  • Remove malicious script from all infected files
  • Scan your website folder with anti-virus and malware detection tool
  • Audit your server with Autoruns tool - Sysinternals
  • Make your cleaned application files go live, ensuring the website functions exactly as before. Check every page load, image and button that must be in place
  • Scan your website with McAfee SiteAdvisor to ensure no malware exists on the website
  • Issue a best practice guideline
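To illustrate the file-size comparison from the analysis steps above, here is a minimal Perl sketch that compares a known-good backup of the web root against the live copy and flags files that have grown; the two paths and the growth threshold are assumptions to adjust per case:

use strict;
use warnings;
use File::Find;
use File::Spec;

my $clean_root    = 'C:/backup/webroot';      # known-good backup (assumed path)
my $infected_root = 'C:/inetpub/webroot';     # live copy under analysis (assumed path)
my $threshold     = 1024;                      # flag growth of roughly 1 KB or more

find(sub {
    return unless -f $_;
    my $rel  = File::Spec->abs2rel($File::Find::name, $clean_root);
    my $live = File::Spec->catfile($infected_root, $rel);
    return unless -f $live;
    my $growth = (-s $live) - (-s $_);
    print "SUSPECT: $rel grew by $growth bytes\n" if $growth >= $threshold;
}, $clean_root);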

Hackers keep pace with our own evolving technology. Be Aware to Be Safe.

Tuesday, August 23, 2011

Automating Nessus Capabilities


While automating network scans for large networks, there is a need to automate Nessus scans as well. The major advantage of this automation is that it lets you schedule scans even with the Home Feed (scheduling is otherwise available only in the Professional Feed), and the scans run exactly as if you had launched them from the Nessus web interface.

The Nessus automation Perl script below uses the first policy defined in your Nessus web client to run the scans. The script is based on my previous concept of Automating NMAP:

use strict;
use warnings;
use Net::Nessus::XMLRPC;

my $file = "ipadr.txt";   # one target IP per line

# Empty URL defaults to the local Nessus server; enter your Nessus username and password
my $n = Net::Nessus::XMLRPC->new('', 'admin', 'admin');
die "Cannot login to: " . $n->nurl . "\n" unless $n->logged_in;
print "Logged in\n";

# Use the first policy defined in the Nessus web client
my $polid   = $n->policy_get_first;
my $polname = $n->policy_get_name($polid);
print "Using policy ID: $polid with name: $polname\n";

# Launch a scan named "report" against the targets listed in the file
my $scanid = $n->scan_new_file($polid, "report", '', $file);
print "Performing scan on:\t$scanid\n";

# Poll every 15 seconds until the scan finishes
while (not $n->scan_finished($scanid)) {
    print "$scanid: " . $n->scan_status($scanid) . "\n";
    sleep 15;
}
print "$scanid: " . $n->scan_status($scanid) . "\n";

# Download the report and save it locally
my $reportcont = $n->report_file_download($scanid);
my $reportfile = "report.html";
open(my $fh, '>', $reportfile) or die "Cannot open file $reportfile: $!";
print $fh $reportcont;
close($fh);

How to Run:

1.       Install Perl and the Net::Nessus::XMLRPC module
2.       Create a file named “ipadr.txt” and dump your entire IP list here; one entry on each line. For ex:
       
       10.0.0.1
       10.0.0.2
       10.0.0.3

3.       Copy the above script into a text editor and save it as nessus.pl
4.       Place nessus.pl and ipadr.txt in same folder. Ex: C:\Auto_Nessus
5.       Go to command prompt and browse till C:\Auto_Nessus.
6.       Fire command:

       perl nessus.pl

7.       The report will be saved in the same folder as report.html. Alternatively, you can log in to the Nessus web client and view your report from there as well.

The next task is to make this script work with the Windows Scheduler. Copy the code below into a text editor and save it as “Nessus_scan.bat”:

@ECHO OFF
REM cd to folder location
cd C:\Auto_Nessus
perl nessus.pl

Open the Windows scheduler and schedule the batch file to execute at your desired time.

Happy Scanning!!!

Monday, August 8, 2011

Automating NMAP Capabilities


Many times I have encountered projects where scanning of a large number of network hosts is required. In that case, you simply cannot expect your consultant to scan each host individually, analyze the output and list all vulnerable ports/services. Yes, we can detect open ports with Nessus, but it has a host limit per scan.

I decided to automate this process to get a list of open ports for each host and dump the output into a single file. You just need to have Perl installed on your machine to see how this works. Here’s a Perl script to automate an NMAP scan:

use strict;
use warnings;

my $s1   = "-sS -sV -P0";    # place your nmap options here
my $file = "ipadr.txt";      # one target IP per line

open(my $fh, '<', $file) or die "Cannot open $file: $!";
foreach my $line (<$fh>) {
    chomp($line);
    next unless $line;                  # skip blank lines
    my $cmd = "nmap $s1 $line";
    print "Running:\t$cmd\n";
    system($cmd);                       # nmap output goes to STDOUT
}
close($fh) or die $!;

How to Use:

1.       Install Perl from here
2.       Create a file named “ipadr.txt” and dump your entire IP list here; one entry on each line. For ex:

10.0.0.1
10.0.0.2
10.0.0.3

3.       Copy the above script into a text editor and save it as nmap.pl
4.       Place nmap.pl and ipadr.txt in same folder. Ex: C:\Auto_NMAP
5.       Go to command prompt and browse till C:\Auto_NMAP.
6.       Fire command:

perl nmap.pl > output.txt

7.       The output.txt file will be created in the same folder, containing the complete nmap results.

Furthermore, you can use this script to run any NMAP command and get the output dumped into a single file. To run other commands, you just need to edit the line below:

my $s1="-sS -sV -P0";

Happy Scanning!!!

Thursday, July 21, 2011

Mantra Security Framework – 0.6.1 Released


As a security professional, it is always good to have a portable, ready-to-run and compact tool for security testing. The Mantra Security Toolkit offers a browser-based collection of free and open source tools for quick security testing. It covers all the phases of hacking: reconnaissance, scanning and enumeration, gaining access, escalation of privileges, maintaining access, and covering tracks. It also embeds advanced attack tools as plug-ins, such as XSS Me, SQL Inject Me and Access Me.


The tool has a user-friendly graphical interface and is intended to be light, flexible and portable. You can carry it on memory cards, flash drives, CDs/DVDs, etc. It runs natively on Linux, Windows and Mac platforms.

Mantra follows the structure and guidelines of FireCAT, which makes it more accessible. The set of tools offered by Mantra, which makes an attacker’s task easier, is:

+Information Gathering
+Whois
-Flagfox
+Location Info
-Flagfox
+Enumeration and Fingerprint
-Host Spy
-JSView
-PassiveRecon
-View Dependencies
-Wappalyzer
+Data Mining
-People Search Engine
-Facebook search
+Editors
-Cert Viewer Plus
-Firebug
-JSView
+Network Utilities
+Protocols and applications
+FTP
-Fire FTP
+DNS
-DNS Cache
+SQL
-SQLite Manager
+Sniffers
-HTTP Fox
+Password
-CryptoFox 2.0
+Misc
+Tweaks and Hacks
-Greasemonkey
+Scripts
-Greasefir
+Malware scanner
-Web of Trust
+Automation
-iMacros
+Others
-CacheToggle 0.6
-URL Flipper
+Application Auditing
-Hackbar
-JavaScript Deobfuscator
-RESTClient
-Tamper Data
-Live HTTP Headers
-RefControl
-User Agent Switcher
-Web Developer
-DOM Inspector
-Inspect This
-Formfox
+Exploit Me
-Access Me
-SQL Inject Me
-XSS Me
+Cookies
-Cookies Manager+ 1.5.1
-Firecookie
+Proxy
-FoxyProxy Standard 2.22.6
-HttpFox

Installation: No installation is required. You just need to close all Firefox windows before you double-click the .exe file.

Download all versions here.

Monday, July 18, 2011

Catching Back Doors through Code Reviews

Of late, code reviews have been gaining a lot of popularity. Organizations which until recently were content with a secure network and an occasional penetration test are now getting their application’s code reviewed before going live.
A code review, over and above what application penetration tests find, can uncover backdoors and Trojans in the code. These backdoors could have been introduced in the code intentionally or inadvertently.
Insecurities in most applications may arise for a number of reasons, one important reason being the huge pressure on developers to meet the functional requirements and deliver on time. Some of the common mistakes developers may make are -
  1. Fail to link a page to other web pages
  2. Put some test code and forget to delete it
  3. Misplace web pages in the home directory which are actually meant for other application modules
  4. Some malicious developers may intentionally plant a backdoor for future access

How do backdoors enter the application?

Consider a web based application built in ASP.NET. The application has strict authentication and authorization controls. A secure session management scheme has been implemented.
But unfortunately, one of the developers has unintentionally left some test pages in the application directory. A test page was written to execute a few database queries from the front end, basically for “ease of use”. An attacker notices the test page while browsing the application, quickly replaces the web page name in the URL with the test page name, accesses the page and retrieves customers’ credit card information. Thus, a small mistake in the development phase can result in theft of confidential information.
The existence of a backdoor can allow attackers to inject, view, modify or delete database records and web pages without authorization. In some cases, it may even let them penetrate the system and execute system commands.
The key characteristics of backdoors are:
  1. Orphaned web pages
  2. Left over Debug code
  3. Invisible Parameters
  4. Unnecessary web pages
  5. Usage of DDL statements
  6. Usage of Deletes/Updates

Techniques to detect backdoors through code review

Let’s see how we look for backdoors using each of the above mentioned characteristics.

Orphaned web pages

Look for all web pages that are not linked or called from any other web page; they were probably used for testing and never removed. This can be detected by analyzing page header directives and checking whether the page is ever called.
The task can be made easier by writing a Perl script that searches the whole application for pages that are not linked from any other web page. Another way is a Perl string search for a particular web page name, say test.aspx, throughout the application directory: the script prints every line of application code that contains test.aspx, and if nothing references the page it is a candidate orphan. This method still requires manual analysis of the source code.
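A minimal sketch of that string-search idea, assuming a Windows web root and test.aspx as the page of interest (path, page name and file extensions are illustrative and should be adapted to your application):

use strict;
use warnings;
use File::Find;

my $dir  = 'C:/inetpub/wwwroot/app';   # assumed application root
my $page = 'test.aspx';                # page to look for references to

find(sub {
    return unless -f $_ && /\.(aspx|ascx|cs|js|config|html?)$/i;
    open(my $fh, '<', $_) or return;
    while (my $line = <$fh>) {
        # Print every line that references the page; if nothing in the
        # application references it, the page is a candidate orphan.
        print "$File::Find::name:$.: $line" if $line =~ /\Q$page\E/i;
    }
    close $fh;
}, $dir);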

Left over Debug code

Look for all web pages where a session object is assigned a value from user input. Session object variables hold information about a single user and are available to all web pages across the application. So, if a session object is assigned a value on one page, the same session object can be used to make a decision or build a SQL query on another page. Let’s say the session object was used to test a role-based access feature in the application; the developer later decides to use classic ASP style coding and forgets to delete the code. An attacker notices this and changes the session object value to gain higher-privileged access to the application, causing authorization bypass or privilege escalation. If session objects are assigned a value from user input and are used as the logic for authorization, then it’s a vulnerability.

Invisible Parameters

For every web page, identify the GET or POST parameters the page parses. Look for parameters that are processed in the server-side code but never appear in any client-side form or link.
The task can be simplified by writing a Perl script which extracts the input parameters declared in the client-side markup and the parameters read by the server-side code, stores them in two arrays, and compares the two to find parameters that appear only in the server-side code.
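A rough sketch of that comparison, again in Perl; the application path and the two regexes (one for client-side name="..." attributes, one for server-side Request[...] reads) are simplifying assumptions:

use strict;
use warnings;
use File::Find;

my $dir = 'C:/inetpub/wwwroot/app';    # assumed application root
my (%client, %server);

find(sub {
    return unless -f $_ && /\.(aspx|ascx|cs|html?)$/i;
    open(my $fh, '<', $_) or return;
    while (my $line = <$fh>) {
        # parameters visible in the client-side markup
        $client{lc $1} = 1 while $line =~ /name\s*=\s*"([^"]+)"/gi;
        # parameters read by the server-side code
        $server{lc $1} = 1 while $line =~ /Request(?:\.QueryString|\.Form)?\s*[\[(]\s*"([^"]+)"/gi;
    }
    close $fh;
}, $dir);

print "Invisible parameter: $_\n" for grep { !$client{$_} } sort keys %server;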

Unnecessary web pages

Look for web pages that are not linked from the current working directory of the application. There may be pages which were simply dropped into the application folder but are actually called from other application modules.

Usage of DDL statements

Look in all web pages for statements that perform operations like DELETE, DROP, ALTER or CREATE. These operations must not be issued from the code-behind; instead they should be handled through stored procedures.

Usage of DELETE/UPDATE

In all web pages, look for DELETE and UPDATE statements without a WHERE clause or WHERE conditions that always evaluate to True.

Best Practices

Here are some best practices that a developer must keep in mind while developing an application.
  1. Identify and remove all web pages that are not linked to any other application web pages
  2. Identify and remove GET/POST parameters that are not used by the application
  3. Segregate web pages accordingly. It is best to have critical application modules hosted on separate servers
  4. Do not assign value from user input to global variables
  5. Always use stored procedures for DDL operations

.NET Inherent Protection against CSRF

CSRF Protection in .NET – ViewStateUserKey

Cross-Site Request Forgery is one of the most common attacks on the internet today. Attackers find it easy to exploit because it does not require stealing any authentication information or session cookies; it only requires that the user be authenticated to the application. And it works on every platform; it doesn’t matter whether the application uses Windows or Forms authentication. Let’s assume an attacker hosts a page on some server X which executes critical application functionality; the attacker, having knowledge of the application, tricks the victim into visiting the page via a phishing attack, email abuse, a redirection flaw, etc. The action executes in the background, without the victim’s knowledge, using the user’s logged-in session state.

The .NET 3.5 framework has built-in functionality to prevent CSRF attacks: the ViewStateUserKey property of the Page class in the System.Web assembly, which partially fixes the issue. Used alone, however, it does not make the application completely safe against CSRF; it is just one part of a defense-in-depth approach.

ViewStateUserKey assigns a per-user value to the page being served, so the ViewState generated for one user cannot be replayed by another; ViewState must therefore be enabled in the application. The value can be the user’s authentication cookie, session ID or any random token. A good approach is to set two variables on authentication, the first in ViewState and the second on the server side, and then drop the first variable from ViewState after successful authentication. Before serving a critical-functionality page to the client, set the first variable in the request and compare it with the server-side variable on the response; if they match, process the request, otherwise drop it. Make sure that the first variable is kept hidden.

It is recommended to use ViewState encryption mode along with the ViewStateUserKey method.

More solutions to CSRF:

  • Check the Referer header for the expected domain, along with the above-mentioned implementation.
  • Add a unique and random page token to every form. This token must change on every form submission; for example, token X is used for adding a user, and it changes to Y when updating a user, and so on.