When you think about a technical audit, what comes to your mind first? You might start with architecture, infrastructure, source code, security, performance, or something else. But what is really required, and what should the direction be?
The software industry is growing rapidly, and the technical challenges of maintaining clean, high-quality source code grow with it. In this article I am going to talk about the standard technical practices that should be checked when auditing any software product.
A technical audit is not about doing just one thing; the audit team should have broad development experience. Here I am going to explain the different areas to be covered in a technical audit. Depending on the nature of the project, a few may not apply, but most remain the same.
I have done audits for small as well as very large projects, most of them involving Java backends, AWS/Azure cloud, Android apps, iOS apps, Roku apps, Smart TV apps, and web applications. From auditing these different applications, I have realized that a few standard technical guidelines are always required, irrespective of the nature of the project. Languages and frameworks may differ, but the development approach should be standardized.
Here is the checklist I usually refer to for audits. These are just guidelines, as the scope of an audit varies from project to project, so after reading through the details you should create your own checklist.
- 1. Coding Standards
- 1.1 Basic Coding Standards and Coding guidelines
- 1.2 Error Handling
- 1.3 No Suspicious Comments
- 1.4 Copyright and Confidentiality Statements
- 1.5 Design Patterns
- 1.6 Logging usage and log file rollover strategy
- 1.7 Usage of constants/properties files over hard-coded text
- 1.8 No large commented sections
- 1.9 Basic API/frontend/database validations
- 2. Security
- 3. Static code analysis
- 4. Performance
- 5. Data Storage
- 6. Architecture
- 7. Tools and processes
Let’s discuss these areas in detail:
1. Coding Standards
1.1 Basic Coding Standards and Coding guidelines
Without a standard to follow, each developer (and sometimes each file) takes on a standard of its own, or becomes a random mash of styles. A project should define coding standards and guidelines that every developer has to follow.
Coding standards apply at various levels, e.g. indentation, structured programming, resource grouping, classes, subroutines, functions and methods, method complexity, code reusability, file names, variable names, use of braces, compiler warnings, etc.
Coding guidelines give the source code a uniform structure: for example, line length, spacing, variable declarations, inline comments, meaningful error messages, reasonably sized functions/methods, number of lines per file, use of constants, etc.
1.2 Error Handling
Error handling takes two forms: structured exception handling and functional error checking. Structured exception handling is preferred because it makes it easier to cover 100% of the code; in languages without exceptions it is very hard to cover every error path. Applications should always fail safe: if an application fails into an unknown state, an attacker may be able to exploit that indeterminate state to access unauthorized functionality or, worse, to create, modify, or destroy data. A minimal sketch follows the checklist below.
Fail Safe
- Inspect the application’s fatal error handler.
- Does it fail safe? If so, how?
- Is the fatal error handler called frequently enough?
- What happens to in-flight transactions and ephemeral data?
Error Handling
- Does production code contain debug error handlers or messages?
- If the language is a scripting language without effective pre-processing or compilation, can the debug flag be turned on in the browser?
- Do the debug messages leak privacy related information, or information that may lead to further successful attack?
Exception handling
- Does the code use structured exception handlers (try {} catch {} etc) or function-based error handling?
- If the code uses function-based error handling, does it check every return value and handle the error appropriately?
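As a minimal sketch of these points, assuming a hypothetical payment operation, structured exception handling that fails safe could look like this: expected errors produce a safe result for the caller, and unexpected errors are logged server-side without leaking internals.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Minimal fail-safe sketch; PaymentProcessor and chargeAccount are hypothetical names.
public class PaymentProcessor {

    private static final Logger LOG = Logger.getLogger(PaymentProcessor.class.getName());

    /** Returns true only when the payment is known to have succeeded. */
    public boolean process(String accountId, long amountCents) {
        try {
            if (accountId == null || amountCents <= 0) {
                throw new IllegalArgumentException("Invalid payment request");
            }
            chargeAccount(accountId, amountCents);   // may throw on any failure
            return true;
        } catch (IllegalArgumentException e) {
            // Expected failure: return a safe outcome without exposing internals to the caller.
            LOG.warning("Rejected payment request: " + e.getMessage());
            return false;
        } catch (RuntimeException e) {
            // Unexpected failure: log the details server-side and fail closed,
            // rather than leaving the transaction in an unknown state.
            LOG.log(Level.SEVERE, "Payment failed, no charge applied", e);
            return false;
        }
    }

    private void chargeAccount(String accountId, long amountCents) {
        // Placeholder for the real charging logic (database or payment gateway call).
    }
}
```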
1.3 No Suspicious Comments
The code should not contain comments that suggest the presence of bugs, incomplete functionality, or weaknesses. Suspicious comments such as BUG, HACK, FIXME, LATER, LATER2, and TODO often indicate missing functionality or security checks. Others point to code problems that programmers should fix, such as hard-coded values, missing error handling, not using stored procedures, and performance issues.
1.4 Copyright and Confidentiality Statements
If you are a software developer or vendor, your intellectual property is critical to your business. After all, your primary product is your original code, and if that code is used or misappropriated by a competitor, your business will suffer. One relatively easy way to discourage this is to use copyright notices: a clear copyright should be asserted by whichever party is the appropriate owner of the copyright on the application.
1.5 Design Patterns
Use an appropriate design pattern (if it helps) only after fully understanding the problem and its context. And if a design pattern is used, the software should implement it correctly and consistently.
1.6 Logging usage and log file rollover strategy
Verify which logging framework is used, that logging levels are applied correctly, that logs contain enough information for diagnosis, and that a log rollover strategy is in place so you never end up with a single log file holding gigabytes of data.
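As an illustration, a rollover strategy can be as simple as the JDK's built-in rotating FileHandler; the file name pattern, size limit, and levels below are placeholders to adapt per project.

```java
import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

/** Sketch of log rotation with the JDK's built-in logging (names and limits are examples). */
public class LoggingSetup {

    public static Logger configure() throws IOException {
        Logger logger = Logger.getLogger("com.example.app");

        // Rotate across 10 files of at most 5 MB each instead of one ever-growing file.
        FileHandler handler = new FileHandler("app-%g.log", 5_000_000, 10, true);
        handler.setFormatter(new SimpleFormatter());
        handler.setLevel(Level.INFO);                // keep FINE/DEBUG noise out of production logs

        logger.addHandler(handler);
        logger.setLevel(Level.INFO);
        return logger;
    }
}
```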
1.7 Usage of constants/properties files over hard-coded text
This is one of the simpler activities in a code review. Whenever you see a hard-coded value, first ask yourself whether there is a way to remove the static value and make it configurable, then go ahead and improve it. If there is no way to get rid of it, store it in a constant (normally in a constants class/file used across the project).
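A minimal sketch, assuming a hypothetical application.properties file on the classpath, of reading configurable values from one place instead of scattering literals through the code:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

/** Sketch of preferring a properties file over hard-coded text (file name and keys are illustrative). */
public final class AppConfig {

    private static final Properties PROPS = new Properties();

    static {
        try (InputStream in = AppConfig.class.getResourceAsStream("/application.properties")) {
            if (in != null) {
                PROPS.load(in);
            }
        } catch (IOException e) {
            throw new IllegalStateException("Could not load application.properties", e);
        }
    }

    private AppConfig() { }

    /** Falls back to a named default rather than a literal scattered through the code. */
    public static String get(String key, String defaultValue) {
        return PROPS.getProperty(key, defaultValue);
    }
}
```

Usage is then a single, searchable call such as `AppConfig.get("session.timeout.minutes", "30")` instead of a magic number repeated in several classes.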
1.8 No large commented sections
There is no real excuse for commented-out code, regardless of the reason. If you are keeping it for debugging purposes, you can instead create a trace mechanism that is disabled in release mode or supports tracing levels (it is always useful to be able to trace in a release version), or you can simply use a debugger. Commented-out code is bad because it is very confusing for other people who have to read the code, especially when you are under pressure to fix a bug and the original author is away on vacation.
1.9 Basic API/frontend/database validations
Data should always be assumed to be bad until it has been through some kind of validation process. Make no assumptions about the data you receive: someone, somewhere will eventually send you a request that breaks something. A plain-Java sketch of some of these validation levels follows the list below.
Validation Levels
- Type – The validation restricts what value a property may take by type (date, integer, character etc.)
- Value – The validation restricts what value a particular property may take. For example, the agreed price for a contract must be between 5,000 and 10,000.
- Dependent Validation – The validation defines a dependency between two columns in the same row. For example, the end date of a contract must be after the start date.
- Uniqueness – The validation defines a dependency between different rows in the table. The most obvious example is a primary key. The primary key columns must always be unique within the table.
- References – A validation that involves more than one property/table. A foreign key constraint is an example of such a validation.
- Transition constraints – All the validations mentioned previously operate on a static state. A transition constraint defines how a value is allowed to change, typically implemented using workflow engines. For example, a status column may be changed from “in progress” to “finished” but not directly from “to do” to “finished”.
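A plain-Java sketch of the type, value, and dependent validations above, using the contract example from the list (the class name and limits are illustrative):

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

/** Sketch of basic validations for the hypothetical contract example. */
public class ContractValidator {

    public List<String> validate(long price, LocalDate startDate, LocalDate endDate) {
        List<String> errors = new ArrayList<>();

        // Value validation: the agreed price must be between 5,000 and 10,000.
        if (price < 5_000 || price > 10_000) {
            errors.add("Price must be between 5,000 and 10,000");
        }

        // Type validation: reject nulls before any comparison is attempted.
        if (startDate == null || endDate == null) {
            errors.add("Start and end dates are required");
            return errors;
        }

        // Dependent validation: the end date must be after the start date.
        if (!endDate.isAfter(startDate)) {
            errors.add("End date must be after start date");
        }
        return errors;
    }
}
```

Uniqueness and reference validations belong in the database (primary and foreign key constraints) rather than in application code alone.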
2. Security
2.1 Cross-Site Scripting (XSS) prevention
Cross-site scripting flaws can be difficult to identify and remove from a web application. The best way to find them is an intensive code review that searches for every place where user input from an HTTP request could make its way into the HTML output.
The code reviewer needs to verify that untrusted data is not transmitted in HTTP responses as raw HTML or JavaScript. When data is sent from the server to the client, untrusted data must be properly output-encoded for the HTTP response. Do not assume data from the server is safe; the best practice is to always check it.
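As a minimal illustration of output encoding, the sketch below escapes the characters that matter in an HTML body context. In practice a vetted library such as the OWASP Java Encoder is preferable to hand-rolled escaping; this is only to show what the reviewer should look for.

```java
/** Minimal HTML output-encoding sketch (prefer a vetted encoder library in real code). */
public final class HtmlEncoder {

    private HtmlEncoder() { }

    /** Encodes untrusted text before it is written into an HTML response body. */
    public static String forHtml(String untrusted) {
        if (untrusted == null) {
            return "";
        }
        StringBuilder out = new StringBuilder(untrusted.length());
        for (char c : untrusted.toCharArray()) {
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}
```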
2.2 Application security – Configuration Management
- Check HTTP methods supported and Cross Site Tracing (XST)
- Test for security HTTP headers (e.g. CSP, X-Frame-Options, HSTS); see the sketch after this list
- Test for policies (e.g. Flash, Silverlight, robots)
- Check for sensitive data in client-side code (e.g. API keys, credentials)
- Check for usage of weak algorithms
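A sketch of how those security headers can be enforced centrally, assuming a Servlet 4.0+ container (the header values are examples and need tuning per application):

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

/** Sketch of setting security headers once, in a servlet filter, instead of per page. */
public class SecurityHeadersFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse http = (HttpServletResponse) response;
        http.setHeader("X-Frame-Options", "DENY");
        http.setHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains");
        http.setHeader("Content-Security-Policy", "default-src 'self'");
        http.setHeader("X-Content-Type-Options", "nosniff");
        chain.doFilter(request, response);
    }
}
```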
2.3 Application security – Secure Transmission
- Check SSL version, algorithms, and key length
- Check for digital certificate validity (duration, signature, and CN)
- Check that credentials are only delivered over HTTPS
- Check that the login form is delivered over HTTPS
- Check that session tokens are only delivered over HTTPS
- Check whether HTTP Strict Transport Security (HSTS) is in use
- Test ability to forge requests
- Test web messaging (HTML5)
- Check CORS implementation (HTML5)
2.4 Data security
- Compliance – Corporate compliance and privacy awareness
- Information governance
- Record retention
- Security of data
- Encryption of sensitive information
- Check for proper use of salting
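A minimal sketch of salted password hashing using the JDK's PBKDF2 support; the iteration count and key length below are illustrative and should follow current guidance for the project.

```java
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.security.spec.InvalidKeySpecException;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

/** Sketch of salted password hashing with PBKDF2 (parameters are illustrative). */
public final class PasswordHasher {

    private static final SecureRandom RANDOM = new SecureRandom();

    private PasswordHasher() { }

    /** Returns a random 16-byte salt to be stored alongside the hash. */
    public static byte[] newSalt() {
        byte[] salt = new byte[16];
        RANDOM.nextBytes(salt);
        return salt;
    }

    /** Hashes the password with the given salt; never store the plain-text password. */
    public static byte[] hash(char[] password, byte[] salt)
            throws NoSuchAlgorithmException, InvalidKeySpecException {
        PBEKeySpec spec = new PBEKeySpec(password, salt, 210_000, 256);
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        return factory.generateSecret(spec).getEncoded();
    }
}
```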
2.5 APIs/UIs with critical functionality exposed without any security
This is again a very difficult task to perform, but very important to cover. No critical functionality, such as data modification, payment data, or reporting, should be exposed without security controls.
2.6 Authentication layer
- Test password quality rules
- Test for out-of-channel notification of account lockouts and successful password changes
- Test for weak security question/answer
- Test for brute force protection
- Test for credentials transported over an encrypted channel
- Test for cache management on HTTP (e.g. Pragma, Expires, Max-age)
- Test for user-accessible authentication history
- Test for Access control layer
2.7 Login session/token expiry/validation
- Establish how session management is handled in the application (e.g. tokens in cookies or in the URL)
- Check session tokens for cookie flags (httpOnly and secure); see the sketch after this list
- Check session cookie scope (path and domain)
- Check session cookie duration (expires and max-age)
- Check session termination after a maximum lifetime
- Check session termination after relative timeout
- Check session termination after logout
- Test to see if users can have multiple simultaneous sessions
- Test session cookies for randomness
- Confirm that new session tokens are issued on login, role change, and logout
- Test for consistent session management across applications with shared session management
- Test for session puzzling
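A sketch of issuing a session cookie with the flags from this checklist, assuming the Servlet 3.0+ API (the cookie name, path, and lifetime are placeholders):

```java
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

/** Sketch of a session cookie with the httpOnly, secure, scope, and duration flags set. */
public final class SessionCookies {

    private SessionCookies() { }

    public static void addSessionCookie(HttpServletResponse response, String token) {
        Cookie cookie = new Cookie("SESSIONID", token);
        cookie.setHttpOnly(true);          // not readable from JavaScript
        cookie.setSecure(true);            // only sent over HTTPS
        cookie.setPath("/app");            // restrict scope to the application path
        cookie.setMaxAge(30 * 60);         // 30-minute absolute lifetime
        response.addCookie(cookie);
    }
}
```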
3. Static code analysis
- No violations of static analysis rules such as PMD (SonarQube report)
- No method should exceed the permitted complexity threshold
- Minimal code duplication
- Unit test cases
- Code coverage
- Proper nesting
- Number of function calls
- Cyclomatic complexity
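To make the nesting and complexity items concrete, here is a small, hypothetical example of flattening nested conditionals with guard clauses, which keeps each path shallow and easier to test:

```java
/** Sketch: guard clauses instead of nested ifs (discount rules are made up for illustration). */
public class DiscountCalculator {

    public int discountPercent(boolean loggedIn, boolean premiumMember, int orderTotal) {
        if (!loggedIn) {
            return 0;                                  // early return: no nesting needed
        }
        if (!premiumMember) {
            return orderTotal > 100 ? 5 : 0;           // regular customer path stays flat
        }
        return orderTotal > 100 ? 15 : 10;             // premium path stays flat
    }
}
```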
4. Performance
Every system has a limited capacity, and as the amount of data, users, queries, page views, etc. grows, performance will eventually start to degrade. It takes specific methods, skills, and effort to understand the capacity and throughput limits and keep performance stable as the system grows.
- Average response time (for APIs or UIs); see the sketch after this list
- Concurrent requests benchmark (Project specific)
- Identify performance bottlenecks and perform load testing
- Performance monitoring tools in place
- Is the software/hardware fully optimized to handle load peaks?
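A rough sketch of measuring average response time under concurrent requests with the Java 11+ HttpClient; the URL and request count are placeholders, and a real audit would rely on a dedicated load-testing tool (JMeter, Gatling, etc.) rather than a snippet like this.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

/** Fires a batch of concurrent requests and reports the average response time. */
public class ResponseTimeProbe {

    public static void main(String[] args) {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://example.com/api/health")).GET().build();   // placeholder URL

        int requests = 50;                               // placeholder concurrency level
        List<CompletableFuture<Long>> timings = new ArrayList<>();
        for (int i = 0; i < requests; i++) {
            long start = System.nanoTime();
            timings.add(client.sendAsync(request, HttpResponse.BodyHandlers.discarding())
                    .thenApply(response -> (System.nanoTime() - start) / 1_000_000));   // millis
        }

        double averageMillis = timings.stream()
                .map(CompletableFuture::join)
                .mapToLong(Long::longValue)
                .average()
                .orElse(0);
        System.out.printf("Average response time over %d concurrent requests: %.1f ms%n",
                requests, averageMillis);
    }
}
```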
5. Data Storage
- Database schema design efficiency
- Sensitive data, such as passwords and credit card information, stored in encrypted or hashed form in the database
- Database Foreign key associations
- Database query optimization
- Database choice evaluation (e.g. RDBMS vs. NoSQL)
6. Architecture
- Application scalability should be addressed in the architecture
- Feature-wise layering to keep the application manageable
- Database concurrency control strategy
- Design Patterns implementation analysis
- Code Quality checks at regular intervals or via pull requests
- Cache Strategy (browser cache, database cache, application cache)
- Centralized logging system for analysis
- Code reusability should be maximized
- Code readability & management
- Microservices implementation where the scenario warrants it
- Resources categorization for readability
- Database connection pool management (see the sketch after this list)
- Application profiling for multiple environments
- No Unexposed/Unused configuration
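As an example of the connection-pool item above, a sketch assuming HikariCP as the pooling library (the URL, credentials, and sizes are placeholders):

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import javax.sql.DataSource;

/** Sketch of explicit connection-pool configuration instead of unbounded ad-hoc connections. */
public final class PooledDataSource {

    private PooledDataSource() { }

    public static DataSource create() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/appdb");   // placeholder URL
        config.setUsername("app_user");                                // placeholder credentials
        config.setPassword("change_me");
        config.setMaximumPoolSize(10);                 // cap concurrent connections
        config.setConnectionTimeout(3_000);            // fail fast instead of hanging
        return new HikariDataSource(config);
    }
}
```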
7. Tools and processes
- Environment segregation from production
- Database segregation from production
- Release tagging/branching
- Code versioning
- Automated build/release with Jenkins
- Build integrated with test runs
- Running integration tests for every build
- Pull request generation against individual features
- Monitoring tool in place to raise alerts
- Log aggregation tool for cloud-scalable applications
Feel free to share your thoughts if you think other things should be included here.