Thursday, July 21, 2011

Mantra Security Framework – 0.6.1 Released


As a security professional, it is always good to have a portable, ready-to-run and compact tool for security testing. Mantra Security Toolkit offers a browser-based collection of free and open source tools for quick security testing. It covers all the phases of an attack, including reconnaissance, scanning and enumeration, gaining access, escalation of privileges, maintaining access, and covering tracks. It also embeds advanced attack tools as plug-ins, such as XSS Me, SQL Inject Me, and Access Me.


The tool has a user-friendly graphical interface and is intended to be lightweight, flexible and portable. You can carry it on memory cards, flash drives, CDs/DVDs, etc. It runs natively on Linux, Windows and Mac platforms.

Mantra follows the structure and guidelines of FireCAT, which makes it more accessible. The set of tools offered by Mantra that makes an attacker's task easier is:

+Information Gathering
+Whois
-Flagfox
+Location Info
-Flagfox
+Enumeration and Fingerprint
-Host Spy
-JSView
-PassiveRecon
-View Dependencies
-Wappalyzer
+Data Mining
-People Search Engine
-Facebook search
+Editors
-Cert Viewer Plus
-Firebug
-JSView
+Network Utilities
+Protocols and applications
+FTP
-Fire FTP
+DNS
-DNS Cache
+SQL
-SQLite Manager
+Sniffers
-HTTP Fox
+Password
-CryptoFox 2.0
+Misc
+Tweaks and Hacks
-Greasemonkey
+Scripts
-Greasefir
+Malware scanner
-Web of Trust
+Automation
-iMacros
+Others
-CacheToggle 0.6
-URL Flipper
+Application Auditing
-Hackbar
-JavaScript Deobfuscator
-RESTClient
-Tamper Data
-Live HTTP Headers
-RefControl
-User Agent Switcher
-Web Developer
-DOM Inspector
-Inspect This
-Formfox
+Exploit Me
-Access Me
-SQL Inject Me
-XSS Me
+Cookies
-Cookies Manager+ 1.5.1
-Firecookie
+Proxy
-FoxyProxy Standard 2.22.6
-HttpFox

Installation: No installation is required. Just close all Firefox windows before you double-click the .exe file.

Download all versions here.

Monday, July 18, 2011

Catching Back Doors through Code Reviews

Of late, code reviews have been gaining a lot of popularity. Organizations which until recently were content with a secure network and an occasional penetration test are now getting their application's code reviewed before going live.
A code review, over and above what application penetration tests find, can uncover backdoors and Trojans in the code. These backdoors could have been introduced in the code intentionally or inadvertently.
Insecurities in most applications arise for a number of reasons, an important one being the huge pressure on developers to meet functional requirements and deliver on time. Some of the common mistakes developers may make are:
  1. Forget to link a page to the other web pages, leaving it orphaned
  2. Put in some test code and forget to delete it
  3. Misplace web pages in the home directory which are actually meant for other application modules
  4. Some malicious developers may even intentionally plant a backdoor for future access

How do backdoors enter the application?

Consider a web based application built in ASP.NET. The application has strict authentication and authorization controls. A secure session management scheme has been implemented.
But unfortunately, one of the developers unintentionally left a test page in the application directory. The test page was written to execute a few database queries from the front end, basically for "ease of use". An attacker notices the test page while browsing the application, quickly replaces the web page name in the URL with the test page name, accesses the page, and retrieves customers' credit card information. Thus, a small mistake in the development phase can result in the theft of confidential information.
The existence of a backdoor can allow attackers to inject, view, modify or delete database records and web pages without authorization. In some cases, an attacker may even penetrate the underlying system and execute system commands.
The key characteristics of backdoors are:
  1. Orphaned web pages
  2. Left over Debug code
  3. Invisible Parameters
  4. Unnecessary web pages
  5. Usage of DDL statements
  6. Usage of Deletes/Updates

Techniques to detect backdoors through code review

Let’s see how we look for backdoors using each of the above mentioned characteristics.

Orphaned web pages

Look for all web pages that are not linked or called from any other web page; they were probably used for testing and never removed. This can be detected by analyzing page header directives to check for page calls.
The task can be made easier by writing a perl script that searches throughout the application for pages that are not linked from any other web page. Another way is to write a perl script for a string search: given a particular web page name, say test.aspx, it searches the whole application directory and displays every line that contains test.aspx. This method requires manually analyzing the source code.
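The article suggests Perl for this; a minimal sketch of the same idea in Python could look like the following. The `.aspx` extension and simple substring matching are assumptions, and legitimate entry pages (e.g. a login page) will also be flagged, so results still need manual review.

```python
import os

def find_orphaned_pages(app_dir, ext=".aspx"):
    """Report pages under app_dir whose names no other source file mentions."""
    pages, sources = set(), {}
    for root, _, files in os.walk(app_dir):
        for name in files:
            path = os.path.join(root, name)
            with open(path, errors="ignore") as fh:
                sources[path] = fh.read()
            if name.endswith(ext):
                pages.add(name)
    orphans = []
    for page in sorted(pages):
        # A page counts as referenced if its name appears in any OTHER file.
        referenced = any(page in text for path, text in sources.items()
                        if os.path.basename(path) != page)
        if not referenced:
            orphans.append(page)
    return orphans
```

Every page reported here is either an application entry point or a candidate leftover test page worth inspecting by hand.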

Left over Debug code

Look for all web pages where the session object is assigned a value from user input. Session object variables are used to hold information about one single user, and are available to all web pages across the application. So, if a session object is assigned a value on one page, the same session object can be used for making a decision or to make a SQL query on another page. Let’s say, the session object was used to test role based access feature in an application. The developer later decides to use classic ASP style coding and forgets to delete the code. An attacker notices this and changes the session object value to gain higher privileged access to the application. This causes authorization bypass or privilege escalation. If session objects are assigned a value from user input and are used as logic for authorization, then it’s a vulnerability.
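A simple way to surface the pattern described above is to grep for session objects assigned directly from user input. This hedged sketch assumes ASP.NET-style source (`Session[...] = Request...`); every hit is only a lead for manual review, not proof of a vulnerability:

```python
import re

# Assumed ASP.NET-style pattern: a Session variable assigned straight
# from user-supplied input (QueryString / Form / Params).
SESSION_FROM_INPUT = re.compile(
    r'Session\[[^\]]+\]\s*=\s*Request\.(QueryString|Form|Params)',
    re.IGNORECASE)

def flag_session_assignments(source_text):
    """Return (line number, line) pairs where session state comes from input."""
    return [(n, line.strip())
            for n, line in enumerate(source_text.splitlines(), start=1)
            if SESSION_FROM_INPUT.search(line)]
```

A flagged line such as `Session["role"] = Request.QueryString["role"]` is exactly the authorization-bypass pattern the paragraph warns about.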

Invisible Parameters

Identify all GET and POST parameters parsed by each web page. Look for parameters that are read by server-side code but do not appear in any client-side form; such "invisible" parameters can act as hidden switches.
The task can be simplified by writing a perl script which extracts the client-side input parameters and the server-side parameters into two arrays, and compares the two to find parameters that appear only in server-side code.
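The same comparison can be sketched in a few lines of Python instead of Perl. The regexes below are assumptions about the code style (HTML `name="..."` attributes on the client, `Request["..."]`-style access on the server) and would need adapting to a real codebase:

```python
import re

def invisible_parameters(client_html, server_code):
    """Parameters read server-side but absent from any client-side form field."""
    # Form-field names visible to the client (assumed name="..." attributes).
    client = set(re.findall(r'name\s*=\s*"([^"]+)"', client_html, re.IGNORECASE))
    # Parameters the server-side code reads (assumed Request["..."] style).
    server = set(re.findall(
        r'Request(?:\.QueryString|\.Form)?\[\s*"([^"]+)"\s*\]', server_code))
    return server - client
```

Any name left in the result set is a parameter the application accepts without ever advertising it in a form, which is worth investigating.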

Unnecessary web pages

Look for web pages which do not belong in the current working directory of the application. There may be pages that have simply been placed into the application folder but are actually called from other application modules.

Usage of DDL statements

Look in all web pages for DDL statements performing operations like delete, drop, alter or create. These operations should not be issued from code-behind; they should instead be handled through stored procedures.
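A crude but useful heuristic for this check is to flag DDL keywords appearing inside string literals in code-behind files. This is a sketch under that assumption (keyword list and literal-matching are simplifications, so expect some false positives):

```python
import re

# Assumed heuristic: DDL keywords occurring inside string literals.
DDL_IN_LITERAL = re.compile(
    r'["\'][^"\']*\b(DROP|ALTER|CREATE|TRUNCATE)\b', re.IGNORECASE)

def flag_ddl(source_text):
    """Return (line number, line) pairs containing likely inline DDL."""
    return [(n, line.strip())
            for n, line in enumerate(source_text.splitlines(), start=1)
            if DDL_IN_LITERAL.search(line)]
```

The `\b` word boundaries keep identifiers like `create_user_form` from triggering a hit, while `"DROP TABLE ..."` embedded in code is caught.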

Usage of DELETE/UPDATE

In all web pages, look for DELETE and UPDATE statements without a WHERE clause, or with WHERE conditions that always evaluate to true.
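This check is also scriptable. The sketch below scans double-quoted string literals for DELETE/UPDATE statements and flags those with no WHERE clause or a trivially true one; matching only `1=1` as "trivially true" is a simplifying assumption:

```python
import re

def flag_unguarded_dml(source_text):
    """Flag DELETE/UPDATE statements (in double-quoted literals) that have no
    WHERE clause, or whose WHERE condition is trivially true (e.g. 1=1)."""
    hits = []
    for n, line in enumerate(source_text.splitlines(), start=1):
        for stmt in re.findall(r'"([^"]*)"', line):
            if re.match(r'\s*(DELETE|UPDATE)\b', stmt, re.IGNORECASE):
                where = re.search(r'\bWHERE\b(.*)', stmt, re.IGNORECASE)
                if where is None or re.search(r'\b1\s*=\s*1\b', where.group(1)):
                    hits.append((n, stmt))
    return hits
```

A statement like `"DELETE FROM logs"` is flagged, while `"DELETE FROM logs WHERE id = @id"` passes.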

Best Practices

Here are some best practices that a developer must keep in mind while developing an application.
  1. Identify and remove all web pages that are not linked from any other application web page
  2. Identify and remove GET/POST parameters that are not used by the application
  3. Segregate web pages accordingly; it is best to have critical application modules hosted on separate servers
  4. Do not assign values from user input to global variables
  5. Always use stored procedures for DDL operations

.NET Inherent Protection against CSRF

CSRF Protection in .NET – ViewStateUserKey

Cross-Site Request Forgery is one of the most common attacks on the internet today. Attackers find it easy to exploit because it does not require stealing any authentication information or session cookies; it only requires that the user be authenticated to the application. And it works on every platform: it doesn't matter whether the application uses Windows or Forms authentication. Let's assume an attacker hosts, on some external server, a page which triggers critical application functionality. The attacker, having knowledge of the application, tricks the victim into visiting the page through a phishing attack, email abuse, a redirection flaw, etc. The action executes in the background, without the victim's knowledge, using the user's logged-in session.

The .NET 3.5 framework has built-in functionality to mitigate CSRF attacks. The framework provides the ViewStateUserKey property (on System.Web.UI.Page), which partially fixes the issue. Used alone, however, it does not make the application completely safe against CSRF; it is just one part of a defense-in-depth strategy.

ViewStateUserKey assigns a per-user value to the ViewState of the currently visited page, so ViewState must be enabled in the application. This value can be the authentication cookie, the session ID or any random token. A good approach is to set two variables on authentication: the first in ViewState, the second on the server side. Before serving a critical-functionality page to the client, embed the first variable in the request; on submission, compare it with the server-side variable. If they match, process the request; otherwise drop it. Make sure the first variable is kept in a hidden field.
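The two-variable scheme described above is essentially the synchronizer token pattern. A platform-neutral sketch in Python (the dict session store and function names are stand-ins for illustration, not a real framework API):

```python
import secrets

# Stand-in for a real server-side session store.
session_store = {}

def issue_token(session_id):
    """On authentication, store one copy server-side and return the other
    for embedding in a hidden form field (or ViewState)."""
    token = secrets.token_hex(16)
    session_store[session_id] = token
    return token

def verify_token(session_id, submitted_token):
    """On a critical request, compare the submitted copy with the stored one."""
    expected = session_store.get(session_id)
    # Constant-time comparison avoids leaking token bytes via timing.
    return expected is not None and secrets.compare_digest(expected, submitted_token)
```

If the comparison fails, the request is dropped; an attacker's forged request cannot include a token it never received.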

It is recommended to enable ViewState encryption (ViewStateEncryptionMode) along with the ViewStateUserKey method.

More solutions to CSRF:

  • Check the Referer header for the expected domain, along with the implementation mentioned above.
  • Add a unique and random token to every form. This token must change on every form submission: for example, token X is issued for adding a user, then changes to token Y when updating a user, and so on.
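The rotate-on-every-submission idea in the last bullet can be sketched by consuming the token on each successful check (again a hedged illustration with a stand-in dict store, not a framework API):

```python
import secrets

# Stand-in per-session store holding the current form token.
form_tokens = {}

def new_form_token(session_id):
    """Issue a fresh token to embed in the next form."""
    token = secrets.token_hex(16)
    form_tokens[session_id] = token
    return token

def consume_form_token(session_id, submitted):
    """Verify and rotate: each token is valid for exactly one submission."""
    expected = form_tokens.get(session_id)
    ok = expected is not None and secrets.compare_digest(expected, submitted)
    if ok:
        new_form_token(session_id)  # rotate so the old token cannot be replayed
    return ok
```

Because the token changes after every use, capturing one submitted form does not let an attacker forge the next one.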