Secure by design

Secure by design, in software engineering, means that the software has been designed from the ground up to be secure. Malicious practices are assumed to be inevitable, and care is taken to minimise the impact when a security vulnerability is discovered. For instance, when a program asks the user to type his or her name and that name is then used elsewhere in the program, care must be taken that the program does not break when the user enters a blank name.
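As a minimal sketch of that point (the helper function and its name are illustrative, not part of the original example), the code that receives the name can reject a blank value rather than letting it propagate through the rest of the program:

#include <stdio.h>

/* Hypothetical helper: a missing or blank name is rejected instead of
   being passed on to the rest of the program. */
static int name_is_valid(const char *name)
{
    return name != NULL && name[0] != '\0';
}

int main(void)
{
    const char *name = "";   /* stands in for a name typed by the user */

    if (!name_is_valid(name)) {
        fprintf(stderr, "A name is required.\n");
        return 1;
    }
    printf("Hello, %s!\n", name);
    return 0;
}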

Generally, designs that work well do not rely on being secret. It is not mandatory, but proper security usually means that everyone is allowed to know and understand the design, because the design itself does not depend on obscurity. If a design must be hidden from scrutiny to remain safe, it is often because it is flawed. An open design is not automatically secure either; it remains secure only until someone outsmarts the designers. The real advantage of not keeping a design secret is that issues can be discovered, and therefore fixed, sooner. (See Linus's law.)

It is also very important that everything works with the fewest privileges possible. For example, a Web server that runs as the administrative user (root or admin) has the privilege to remove files and users that do not belong to it, so a flaw in that program could put the entire system at risk. By contrast, a Web server that runs inside an isolated environment, with only the network and filesystem privileges it requires, cannot compromise the system it runs on unless the security around it is itself flawed.
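On Unix-like systems, a common way to apply this principle is for a server that must start as root (for example, to bind a port below 1024) to drop those privileges as soon as they are no longer needed. The following sketch uses standard POSIX calls; the unprivileged account name www-data is an assumption for this sketch and differs between systems:

#include <grp.h>
#include <pwd.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Switch from root to an unprivileged account and stay there. */
static void drop_privileges(const char *user)
{
    struct passwd *pw = getpwnam(user);
    if (pw == NULL) {
        fprintf(stderr, "unknown user: %s\n", user);
        exit(EXIT_FAILURE);
    }
    /* Order matters: give up supplementary groups and the group ID first,
       because setuid() removes the right to change them afterwards. */
    if (initgroups(user, pw->pw_gid) != 0 ||
        setgid(pw->pw_gid) != 0 ||
        setuid(pw->pw_uid) != 0) {
        perror("failed to drop privileges");
        exit(EXIT_FAILURE);
    }
}

int main(void)
{
    /* ... bind privileged sockets here, while still root ... */
    drop_privileges("www-data");   /* assumed account name */
    /* ... serve requests with ordinary user privileges ... */
    return 0;
}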

Taken to the extreme, a perfect authentication system would not allow anyone to log in at all, because any user could be a threat to the system. In practice, however, no such design can be perfect: passwords, biometrics and similar mechanisms all have weaknesses.

Security by design in practice

A secure design should distrust many things, especially input. A fault-tolerant program may even distrust its own internals.

Two examples of insecure design are allowing buffer overflows and format string vulnerabilities. The following C program demonstrates these flaws:

#include <stdio.h>

int main(void)
{
    char buffer[100];

    printf("What is your name?\n");
    gets(buffer);      /* flaw 1: gets() performs no bounds checking   */
    printf("Hello, ");
    printf(buffer);    /* flaw 2: user input used as the format string */
    printf("!\n");
    return 0;
}

Because the gets function in the C standard library does not stop writing bytes into buffer until it reads a newline character or EOF, typing more than 99 characters at the prompt constitutes a buffer overflow. Allocating 100 characters for buffer on the assumption that almost no user will enter a name longer than 99 characters does nothing to prevent the user from actually typing more than 99 characters. This can lead to arbitrary machine code execution.
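The overflow can be avoided by reading input with a function that respects the size of the destination buffer. A minimal sketch using fgets from the C standard library:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char buffer[100];

    printf("What is your name?\n");
    /* fgets() writes at most sizeof buffer - 1 characters plus a
       terminating null byte, so the 100-byte buffer cannot overflow;
       any excess input is simply left unread. */
    if (fgets(buffer, sizeof buffer, stdin) == NULL)
        return 1;
    buffer[strcspn(buffer, "\n")] = '\0';   /* drop the trailing newline, if any */
    return 0;
}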

The second flaw is that the program tries to print its input by passing it directly to the printf function. This function prints its first argument, sequentially replacing conversion specifications (such as "%s", "%d", et cetera) with further arguments taken from its call stack as needed. Thus, if a malicious user entered "%d" instead of a name, the program would attempt to print out a non-existent integer value, and undefined behavior would occur.
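The remedy is to pass untrusted input only as data, never as the format string itself. A brief sketch:

#include <stdio.h>

int main(void)
{
    const char *name = "%d is not interpreted here";  /* stands in for user input */

    /* The user-supplied string is passed as an argument, not as the format,
       so any conversion specifications inside it are printed literally. */
    printf("Hello, %s!\n", name);

    /* fputs() is another option when no formatting is needed. */
    fputs(name, stdout);
    fputs("\n", stdout);
    return 0;
}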

A related mistake in Web programming is for an online script not to validate its parameters. For example, consider a script that fetches an article by taking a filename, which is then read by the script and parsed. Such a script might use the following hypothetical URL to retrieve an article about dog food:

http://www.example.net/cgi-bin/article.sh?name=dogfood.html

If the script has no input checking, instead trusting that the filename is always valid, a malicious user could forge a URL to retrieve configuration files from the webserver:

http://www.example.net/cgi-bin/article.sh?name=../../../../../etc/passwd

Depending on the script, this may expose the /etc/passwd file, which on Unix-like systems contains (among other things) user IDs, login names, home directory paths and shells. (See SQL injection for a similar attack.)
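One conservative way to validate such a parameter, sketched here in C to match the earlier examples (the exact rules and the articles/ directory are assumptions for this sketch), is to accept only plain filenames and reject anything containing path separators or "..":

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Accept only plain filenames: nothing empty or hidden, no directory
   separators, no "..". The rules here are illustrative. */
static bool article_name_is_safe(const char *name)
{
    if (name == NULL || name[0] == '\0' || name[0] == '.')
        return false;
    if (strchr(name, '/') != NULL || strchr(name, '\\') != NULL)
        return false;
    if (strstr(name, "..") != NULL)
        return false;
    return true;
}

int main(void)
{
    /* Stands in for the "name" query parameter received by the script. */
    const char *requested = "../../../../../etc/passwd";

    if (!article_name_is_safe(requested)) {
        fprintf(stderr, "rejected article name: %s\n", requested);
        return 1;
    }
    printf("would read articles/%s\n", requested);
    return 0;
}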

Server/client architectures

In server/client architectures, the program at the other end of a connection may not be an authorised client, and the server a client talks to may not be an authorised server. Even when both are, a man-in-the-middle attack could still compromise the communication.
