
Tuesday, June 20, 2017

How Can PVS-Studio Help in the Detection of Vulnerabilities?

By Sergey Vasiliev

A vulnerability, in terms of computer security, is a flaw in a system that allows someone to violate its integrity or deliberately cause it to malfunction. Practice shows that even a seemingly insignificant bug can turn out to be a serious vulnerability. Vulnerabilities can be avoided by using different methods of software validation and verification, including static analysis. This article covers how PVS-Studio copes with the task of finding vulnerabilities.

PVS-Studio is a Tool that Prevents not Only Bugs, but also Vulnerabilities

Later in the article I will explain how we came to this conclusion. But first, a few words about PVS-Studio itself.

PVS-Studio is a static code analyzer that searches for bugs (and vulnerabilities) in programs written in C, C++, and C#. It works under Windows and Linux, and can be integrated into the Visual Studio IDE as a plugin. At this point the analyzer has more than 450 diagnostic rules, each of which is described in the documentation.
By the time this article was posted, we had checked more than 280 open source projects, in which we found more than 11,000 errors. It is quite intriguing to wonder how many of those bugs are real vulnerabilities...
You can download PVS-Studio on the official site, and try it yourself.
By the way, we offer PVS-Studio licenses to security experts. If you are an expert in the field of security and you search for vulnerabilities, you may contact us to get a license. More details about this offer can be found in the article "Handing out PVS-Studio Analyzer Licenses to Security Experts".

Terminology

If you are already well aware of the terminology and know the differences and similarities between CVE and CWE, you may skip this section. Still, I suggest that everyone else take a look at it, so the rest of the article will be easier to follow.
CWE (Common Weakness Enumeration) is a combined list of security weaknesses. Targeted at both the development community and the community of security practitioners, it is a formal list or dictionary of common software weaknesses that can occur in software architecture, design, code, or implementation and can lead to exploitable security vulnerabilities. CWE was created to serve as a common language for describing software security weaknesses; as a standard measuring stick for software security tools targeting these weaknesses; and to provide a common baseline standard for weakness identification, mitigation, and prevention efforts.
CVE (Common Vulnerabilities and Exposures) is a list of specific program errors that can be directly exploited by attackers.
The MITRE Corporation started working on the classification of software vulnerabilities in 1999, when the list of Common Vulnerabilities and Exposures (CVE) came into being. In 2005, within the framework of further development of the CVE system, a team of authors began preparatory work on classifying vulnerabilities, attacks, crashes, and other kinds of security issues, with a view to defining common software security weaknesses. However, despite the self-sufficiency of the classification created within the scope of CVE, it turned out to be too coarse for defining and classifying the code-security assessment methods used by analyzers. The CWE list was created to resolve this problem.

PVS-Studio: A Different Point of View

Background

Historically, we have positioned PVS-Studio as a tool for finding errors. In the articles about our analyses of projects, we have always used the corresponding terminology: a bug, an error, a typo. It's clear that different errors have different levels of severity: some code fragments merely contain redundant or misleading code, while other errors cause the whole system to crash at 5 in the morning every third day. For a long time, the concept didn't go any further than that: errors were just errors.
However, over time it turned out that some of the errors detected by PVS-Studio can be more serious. For example, an incorrectly used printf function can have far worse consequences than printing a wrong message to stdout. When it became clear that quite a number of diagnostic rules detect not only errors but also weaknesses (CWE), we decided to investigate this question in more detail and see how the diagnostic rules of PVS-Studio relate to CWE.

The relation between PVS-Studio and CWE

Based on the results of detecting the correlation between the warnings of PVS-Studio and CWE we created the following table:
CWE | PVS-Studio | CWE Description
CWE-14 | V597 | Compiler Removal of Code to Clear Buffers
CWE-36 | V631, V3039 | Absolute Path Traversal
CWE-121 | V755 | Stack-based Buffer Overflow
CWE-122 | V755 | Heap-based Buffer Overflow
CWE-123 | V575 | Write-what-where Condition
CWE-129 | V557, V781, V3106 | Improper Validation of Array Index
CWE-190 | V636 | Integer Overflow or Wraparound
CWE-193 | V645 | Off-by-one Error
CWE-252 | V522, V575 | Unchecked Return Value
CWE-253 | V544, V545, V676, V716, V721, V724 | Incorrect Check of Function Return Value
CWE-390 | V565 | Detection of Error Condition Without Action
CWE-476 | V522, V595, V664, V757, V769, V3019, V3042, V3080, V3095, V3105, V3125 | NULL Pointer Dereference
CWE-481 | V559, V3055 | Assigning instead of Comparing
CWE-482 | V607 | Comparing instead of Assigning
CWE-587 | V566 | Assignment of a Fixed Address to a Pointer
CWE-369 | V609, V3064 | Divide By Zero
CWE-416 | V723, V774 | Use After Free
CWE-467 | V511, V512, V568 | Use of sizeof() on a Pointer Type
CWE-805 | V512, V594, V3106 | Buffer Access with Incorrect Length Value
CWE-806 | V512 | Buffer Access Using Size of Source Buffer
CWE-483 | V640, V3043 | Incorrect Block Delimitation
CWE-134 | V576, V618, V3025 | Use of Externally-Controlled Format String
CWE-135 | V518, V635 | Incorrect Calculation of Multi-Byte String Length
CWE-462 | V766, V3058 | Duplicate Key in Associative List (Alist)
CWE-401 | V701, V773 | Improper Release of Memory Before Removing Last Reference ('Memory Leak')
CWE-468 | V613, V620, V643 | Incorrect Pointer Scaling
CWE-588 | V641 | Attempt to Access Child of a Non-structure Pointer
CWE-843 | V641 | Access of Resource Using Incompatible Type ('Type Confusion')
CWE-131 | V512, V514, V531, V568, V620, V627, V635, V641, V645, V651, V687, V706, V727 | Incorrect Calculation of Buffer Size
CWE-195 | V569 | Signed to Unsigned Conversion Error
CWE-197 | V642 | Numeric Truncation Error
CWE-762 | V611, V780 | Mismatched Memory Management Routines
CWE-478 | V577, V719, V622, V3002 | Missing Default Case in Switch Statement
CWE-415 | V586 | Double Free
CWE-188 | V557, V3106 | Reliance on Data/Memory Layout
CWE-562 | V558 | Return of Stack Variable Address
CWE-690 | V522, V3080 | Unchecked Return Value to NULL Pointer Dereference
CWE-457 | V573, V614, V730, V670, V3070, V3128 | Use of Uninitialized Variable
CWE-404 | V611, V773 | Improper Resource Shutdown or Release
CWE-563 | V519, V603, V751, V763, V3061, V3065, V3077, V3117 | Assignment to Variable without Use ('Unused Variable')
CWE-561 | V551, V695, V734, V776, V779, V3021 | Dead Code
CWE-570 | V501, V547, V517, V560, V625, V654, V3022, V3063 | Expression is Always False
CWE-571 | V501, V547, V560, V617, V654, V694, V768, V3022, V3063 | Expression is Always True
CWE-670 | V696 | Always-Incorrect Control Flow Implementation
CWE-674 | V3110 | Uncontrolled Recursion
CWE-681 | V601 | Incorrect Conversion between Numeric Types
CWE-688 | V549 | Function Call With Incorrect Variable or Reference as Argument
CWE-697 | V556, V668 | Insufficient Comparison
Table N1 - A preliminary version of the mapping between CWE and PVS-Studio diagnostics
This is not the final version of the table, but it gives some idea of how some of the PVS-Studio warnings relate to CWE. Now it is clear that PVS-Studio successfully detects (and has always detected) not only bugs in program code, but also potential vulnerabilities, i.e. CWE. Several articles have been written on this topic; they are listed at the end of this article.

CVE Bases

A potential vulnerability (CWE) is not yet an actual vulnerability (CVE). Real vulnerabilities, found both in open source and in proprietary projects, are collected on the http://cve.mitre.org site. There you may find a description of a particular vulnerability and additional links (discussions, bulletins of vulnerability fixes, links to the commits remediating vulnerabilities, and so on). The database can also be downloaded in the format you need. At the time of writing this article, the .txt export of the vulnerability database was about 100 MB and more than 2.7 million lines long. Quite impressive, isn't it?
While doing research for this article, I found quite an interesting resource that could be helpful to those who are interested - http://www.cvedetails.com/. It is convenient thanks to features such as:
  • Searching for CVEs by CWE identifier;
  • Searching for CVEs in a particular product;
  • Viewing statistics on the appearance and fixing of vulnerabilities;
  • Viewing various data tables related to CVEs in one way or another (for example, a ranking of vendors with the largest number of vulnerabilities found in their products);
  • And more besides.

Some CVE that Could Have Been Found Using PVS-Studio

I am writing this article to demonstrate that the PVS-Studio analyzer can protect an application from vulnerabilities (at least, from some of them).
We have never investigated whether a certain defect detected by PVS-Studio can actually be exploited as a vulnerability; that is quite complicated, and we have never set ourselves such a task. Therefore, I will do it another way: I'll take several vulnerabilities that have already been detected and described, and show that they could have been avoided if the code had been checked regularly with PVS-Studio.
Note. The vulnerabilities described in the article weren't found in synthetic examples, but in real source files, taken from old project revisions.

illumos-gate

Picture 7
The first vulnerability we are going to talk about was detected in the source code of the illumos-gate project. illumos-gate is an open source project (available in a repository on GitHub) that forms the core of an operating system with Unix roots.
The vulnerability is identified as CVE-2014-9491.
Description of CVE-2014-9491: The devzvol_readdir function in illumos does not check the return value of a strchr call, which allows remote attackers to cause a denial of service (NULL pointer dereference and panic) via unspecified vectors.
The problem code was in the function devzvol_readdir:
static int devzvol_readdir(....)
{
  ....
  char *ptr;
  ....
  ptr = strchr(ptr + 1, '/') + 1;
  rw_exit(&sdvp->sdev_contents);
  sdev_iter_datasets(dvp, ZFS_IOC_DATASET_LIST_NEXT, ptr);
  ....
}
The strchr function returns a pointer to the first occurrence of the character passed as its second argument. However, it can return a null pointer if the character wasn't found in the source string. This fact was forgotten, or not taken into account. As a result, 1 is simply added to the return value, the result is written to the ptr variable, and the pointer is then used "as is". If the returned pointer was null, adding 1 to it produces an invalid pointer, and checking it against NULL won't prove anything about its validity. Under certain conditions this code can lead to a kernel panic.
PVS-Studio detects this vulnerability with the diagnostic rule V769, saying that the pointer returned by the strchr function could be null and at the same time gets corrupted (by the addition of 1):
V769 The 'strchr(ptr + 1, '/')' pointer in the 'strchr(ptr + 1, '/') + 1' expression could be nullptr. In such case, the resulting value will be senseless and it should not be used.
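For reference, here is a minimal sketch of the defensive pattern (a hypothetical helper, not the actual illumos patch): check the result of strchr before advancing past it.

#include <errno.h>
#include <string.h>

/* Hedged sketch: advance past the next '/' only if strchr actually found one.
   The function name and the error handling are hypothetical. */
static int next_path_component(const char *path, const char **out)
{
  const char *slash = strchr(path + 1, '/');
  if (slash == NULL)
    return EINVAL;   /* no '/' found: report an error instead of forming an invalid pointer */
  *out = slash + 1;
  return 0;
}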

Network Audio System

Network Audio System (NAS) is a network-transparent, client-server audio transport system whose source code is available on SourceForge. NAS works on Unix and Microsoft Windows.
The vulnerability detected in this project is identified as CVE-2013-4258.
Description of CVE-2013-4258: Format string vulnerability in the osLogMsg function in server/os/aulog.c in Network Audio System (NAS) 1.9.3 allows remote attackers to cause a denial of service (crash) and possibly execute arbitrary code via format string specifiers in unspecified vectors, related to syslog.
The code was the following:
....
if (NasConfig.DoDaemon) {   /* daemons use syslog */
  openlog("nas", LOG_PID, LOG_DAEMON);
  syslog(LOG_DEBUG, buf);
  closelog();
} else {
  errfd = stderr;
....
In this fragment, the syslog function is used incorrectly. The function declaration looks as follows:
void syslog(int priority, const char *format, ...);
The second parameter should be a format string, and all the subsequent ones are the data required by that format string. Here the format string is missing, and the target message is passed directly as an argument (the buf variable). This caused the vulnerability, which may lead to the execution of arbitrary code.
According to the records in the SecurityFocus database, the vulnerability showed up in Debian and Gentoo.
What about PVS-Studio then? PVS-Studio detects this error with the diagnostic rule V618 and issues a warning:
V618 It's dangerous to call the 'syslog' function in such a manner, as the line being passed could contain format specification. The example of the safe code: printf("%s", str);
The mechanism of function annotation built into the analyzer helps detect errors of this kind; the number of annotated functions is more than 6500 for C and C++, and more than 900 for C#.
Here is the corrected call of this function, which remediates the vulnerability:
syslog(LOG_DEBUG, "%s", buf);
It uses a format string of "%s", which makes the call of the syslog function safe.

Ytnef (Yerase's TNEF Stream Reader)

Ytnef is an open source program available on GitHub. It is designed to decode TNEF streams, created in Outlook, for example.
Over the last several months, quite a number of vulnerabilities were detected in it; they are described here. Let's consider one of the CVEs from this list - CVE-2017-6298.
Description of CVE-2017-6298: An issue was discovered in ytnef before 1.9.1. This is related to a patch described as "1 of 9. Null Pointer Deref / calloc return value not checked."
All the fixed fragments that could contain a null pointer dereference looked approximately the same:
vl->data = calloc(vl->size, sizeof(WORD));
temp_word = SwapWord((BYTE*)d, sizeof(WORD));
memcpy(vl->data, &temp_word, vl->size);
In all these cases, the vulnerabilities are caused by incorrect use of the calloc function. calloc can return a null pointer if the program failed to allocate the requested memory block. But the resulting pointer is not checked against NULL; it is used on the assumption that calloc always returns a non-null pointer. This is rather unwise.
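The general defensive pattern is simple: check the pointer returned by calloc before touching the memory. A minimal sketch (the helper name is hypothetical; this is not the actual ytnef fix):

#include <stdlib.h>
#include <string.h>

/* Hedged sketch: duplicate a buffer, propagating allocation failure to the caller. */
static void *dup_buffer(const void *src, size_t count, size_t elem_size)
{
  void *buf = calloc(count, elem_size);
  if (buf == NULL)
    return NULL;                      /* allocation failed: let the caller handle it */
  memcpy(buf, src, count * elem_size);
  return buf;
}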
How does PVS-Studio detect such vulnerabilities? Quite easily: the analyzer has a number of diagnostic rules that detect incorrect handling of null pointers.
In particular, the vulnerabilities described above would be detected by the V575 diagnostic. Here is what the warning looks like:
V575 The potential null pointer is passed into 'memcpy' function. Inspect the first argument.
The analyzer detected that a potentially null pointer, resulting from the call of calloc, is passed to the memcpy function without being checked against NULL.
That's how PVS-Studio detected this vulnerability. If the analyzer had been used regularly while writing the code, this problem could have been caught before it even reached the version control system.

MySQL

Picture 8
MySQL is an open-source relational database management system. MySQL is usually used as a server accessed by local or remote clients; however, the distribution kit also includes an embedded server library that allows MySQL to be built into standalone programs.
Let's consider one of the vulnerabilities detected in this project: CVE-2012-2122.
The description of CVE-2012-2122: sql/password.c in Oracle MySQL 5.1.x before 5.1.63, 5.5.x before 5.5.24, and 5.6.x before 5.6.6, and MariaDB 5.1.x before 5.1.62, 5.2.x before 5.2.12, 5.3.x before 5.3.6, and 5.5.x before 5.5.23, when running in certain environments with certain implementations of the memcmp function, allows remote attackers to bypass authentication by repeatedly authenticating with the same incorrect password, which eventually causes a token comparison to succeed due to an improperly-checked return value.
Here is the code, having a vulnerability:
typedef char my_bool;
my_bool
check_scramble(const char *scramble_arg, const char *message,
               const uint8 *hash_stage2)
{
  ....
  return memcmp(hash_stage2, hash_stage2_reassured, SHA1_HASH_SIZE);
}
The return type of memcmp is int, while the return type of check_scramble is my_bool, which is actually char. As a result, the int is implicitly converted to char, and the significant bits are lost. Consequently, in 1 out of 256 cases it was possible to log in with any password, knowing only the user name. Given that 300 connection attempts take less than a second, this protection is effectively no protection at all. You may find more details about this vulnerability via the links listed on the following page: CVE-2012-2122.
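The essence of the defect is easy to reproduce: memcmp may return any non-zero int, say 0x200, whose low byte is zero, and truncating it to char turns "not equal" into "equal". Here is a minimal sketch of the problem and one hedged way around it (not the actual MySQL patch):

#include <string.h>

typedef char my_bool;

/* Broken: a non-zero memcmp result such as 0x200 truncates to 0, i.e. "match". */
static my_bool compare_hashes_broken(const void *a, const void *b, size_t n)
{
  return memcmp(a, b, n);
}

/* Hedged fix: collapse the int result to a well-defined 0/1 value before returning. */
static my_bool compare_hashes_fixed(const void *a, const void *b, size_t n)
{
  return memcmp(a, b, n) != 0;
}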
PVS-Studio detects this issue with the help of the diagnostic rule V642. The warning is the following:
V642 Saving the 'memcmp' function result inside the 'char' type variable is inappropriate. The significant bits could be lost breaking the program's logic. password.c
As you can see, it was possible to detect this vulnerability using PVS-Studio.

iOS

Picture 9
iOS is a mobile operating system for smartphones, tablets, and portable players, developed by Apple.
Let's consider one of the vulnerabilities detected in this operating system: CVE-2014-1266. Fortunately, the code fragment demonstrating the issue is publicly available.
Description of the CVE-2014-1266 vulnerability: The SSLVerifySignedServerKeyExchange function in libsecurity_ssl/lib/sslKeyExchange.c in the Secure Transport feature in the Data Security component in Apple iOS 6.x before 6.1.6 and 7.x before 7.0.6, Apple TV 6.x before 6.0.2, and Apple OS X 10.9.x before 10.9.2 does not check the signature in a TLS Server Key Exchange message, which allows man-in-the-middle attackers to spoof SSL servers by (1) using an arbitrary private key for the signing step or (2) omitting the signing step.
The code fragment causing the vulnerability was as follows:
static OSStatus
SSLVerifySignedServerKeyExchange(SSLContext *ctx, 
                                 bool isRsa, 
                                 SSLBuffer signedParams,
                                 uint8_t *signature, 
                                 UInt16 signatureLen)
{
  OSStatus err;
  ....

  if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
    goto fail;
  if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
    goto fail;
    goto fail;
  if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
    goto fail;
  ....

fail:
  SSLFreeBuffer(&signedHashes);
  SSLFreeBuffer(&hashCtx);
  return err;
}
The problem is in the two goto statements written next to each other. The first belongs to the if statement, while the second does not. Thus, regardless of the results of the preceding checks, control jumps to the "fail" label with err still holding a success value, and the checks that should follow are never executed. This allowed man-in-the-middle attackers to spoof SSL servers.
PVS-Studio detects this issue using two diagnostic rules - V640 and V779. These are the warnings:
  • V640 The code's operational logic does not correspond with its formatting. The statement is indented to the right, but it is always executed. It is possible that curly brackets are missing.
  • V779 Unreachable code detected. It is possible that an error is present.
Thus, the analyzer warns about several things that it finds suspicious:
  • The logic of the program does not match the code formatting: judging by the indentation, it looks as if both goto statements belong to the if statement, but that is not so. The first goto really is under the condition, but the second is not.
  • Unreachable code: since the second goto executes unconditionally, the code following it will never run.
So here, too, PVS-Studio coped with the task successfully.
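For illustration, here is a sketch of how the fragment could be written so that formatting and logic agree (just a sketch, not the actual Apple patch); with braces, a stray duplicated line can no longer silently change the control flow:

  if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
  {
    goto fail;
  }
  if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
  {
    goto fail;
  }
  if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
  {
    goto fail;
  }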

Effective Use of Static Analysis

The aim of this article, as I mentioned earlier, is to show that the PVS-Studio analyzer can successfully detect vulnerabilities. The approach chosen to achieve this goal is to demonstrate that the analyzer finds some well-known vulnerabilities. This material was needed to confirm that it is possible to search for vulnerabilities using static analysis.
Now I would like to talk about how to do it more effectively. Ideally, weaknesses should be detected before they turn into real vulnerabilities (i.e. before someone finds them and figures out how they can be exploited); the earlier they are found, the better. By using static analysis properly, vulnerabilities can be detected at the coding stage. Below is a description of how this can be achieved.
Note. In this section I am going to use the word "error" for consistency. But, as we have already seen, simple bugs can be potential - and then real - vulnerabilities. Please do not forget this.
In general, the earlier the error is found and fixed, the lower the cost of fixing it. The figure provides data from the book by Capers Jones "Applied Software Measurement".
Figure - The cost of fixing a defect at different development stages (data from Capers Jones, "Applied Software Measurement")
As you can see from the graph, approximately 85% of errors are made at the coding stage, when the cost of fixing them is minimal. As an error continues to live in the code, the cost of fixing it constantly rises: if it costs only $25 to fix an error at the coding stage, then after the release of the software this figure grows to tens of thousands of dollars. Not to mention the cost of vulnerabilities found after release.
A simple conclusion follows: the sooner an error is detected and fixed, the better. The aim of static analysis is to detect errors in the code as early as possible. Static analysis is not a replacement for other validation and verification tools, but a great addition to them.
How do you get the most benefit out of a static analyzer? The first rule: the code must be checked regularly. Ideally, an error should be fixed at the coding stage, before it is committed to the version control system.
Nevertheless, running full checks continuously on the developer's machine can be quite inconvenient. Besides, analyzing the whole code base can take quite a long time, which makes it impractical to recheck the code after every fix. For this reason, PVS-Studio has a special incremental analysis mode that analyzes only the code modified since the last build. Moreover, the analysis can be run automatically after the build, so the developer doesn't have to think about starting it manually. Once the analysis is complete, the programmer is notified if errors were detected in the modified files.
But even when the analyzer is used this way, there is a chance of an error getting into the version control system. That's why it's important to have a second line of defense: using the static analyzer on the build server, for example by integrating code analysis into the nightly builds. Projects are then checked at night, and in the morning information is gathered about the errors that made it into the version control system. It is important to fix errors detected this way immediately, preferably the next day; otherwise, over time, nobody will pay attention to the new warnings, and there will be little use in such checks.
Introducing static analysis into the development process may seem a non-trivial task if the project is not being developed from scratch. The article "What is a quick way to integrate static analysis in a big project?" gives a clear explanation of how to start using static analysis correctly.

Conclusion

I hope I was able to show that:
  • even a seemingly simple bug may be a serious vulnerability;
  • PVS-Studio successfully copes not only with the detection of errors in code, but also with weaknesses (CWE) and vulnerabilities (CVE).
And if the cost of a simple bug increases over time, the cost of a vulnerability can be enormous. At the same time, with the help of static analysis, many vulnerabilities can be fixed before they even get into the version control system, let alone before someone finds them and starts exploiting them.
Lastly, I would like to recommend trying PVS-Studio on your project - what if you find something that would save your project from ending up in the CVE database?

Additional Links

Friday, December 23, 2016

Stories about Christmas and New Year Bugs

Do you believe in magic? Of course not - it's just against logic! Programmers are serious-minded and well-educated people of a realistic outlook. Well, you didn't favor fairy tales as a child either, did you? OK, I'm not going to answer for you. Just please make yourself a cup of tea, peel a tangerine, look at the snowflakes falling outside the window, and only then go on to read this Story.
What you are about to read is a story about an Evil Bug and its multiple attempts to spoil Christmas Eve and New Year's Eve. It did manage to fulfill its sinister plans a number of times, but, fortunately, in every true fairy tale, evil is always opposed by good.

Christmas-tree virus

On December 17, 1987, a student at the Clausthal University of Technology in what was then West Germany, a beginner programmer at the time, had the bright idea of an ingenious Christmas greeting for his friends. He sent them a Christmas tree! Of course, he hadn't cut it down in a forest, nor had he even bought it in a store. He was a programmer, remember? He just wrote a program in the scripting language REXX for VM/CMS that would draw a nice Christmas tree on the screen and print a few warm words.
Figure 1 - Christmas Tree worm
The hero of our story surely meant well, but Evil Bug interfered: the self-replicating Christmas program overloaded the networks and paralyzed the private email network IBM Vnet all over the world for two days (the chain was this: university network - EARN - BitNet - IBM Vnet). The hero was suspected of being an anti-hero, and his touching greeting, a worm. The programmer's malicious intent was never proved, but Evil Bug was surely involved in that story.

Unprecedented-generosity show

People traditionally exchange presents on Christmas Eve and New Year's Eve. Beautifully packed boxes under the Christmas tree or cute souvenirs in Christmas stockings hung by the fireplace - this is what traditional Christmas and New Year presents look like. However, surprises are particularly pleasant.
Amazon was one of the first Internet services with tens of thousands of goods of all kinds sold and bought daily. A perfect place to pick presents! That's exactly what site visitors were doing on December 12, 2014. Huge excitement was caused by the fact that thousands of items were selling for the wonderful price of just 1 penny (source). Infinitely grateful to Amazon for such a gorgeous Christmas present, the buyers enthusiastically filled their carts. Meanwhile, Evil Bug was watching and smirking, anticipating the reaction of the sellers, who knew nothing yet about the huge losses they had suffered.
The bug was hiding in RepricerExpress software responsible for synching prices in online stores. This software facilitates competition by enabling sellers to respond promptly to price fluctuations for like products.
What did Evil Bug do? It sneaked into RepricerExpress when it was only going through development and testing, but never showed up until... one of the sellers, caught in the pre-holiday turmoil, accidentally set a single price - 1 penny - for all of their stock. The software took that value as a minimum price and cut the prices for other sellers' like products accordingly.
That behavior had to do with the fact that when developing the UI, the software authors had not implemented a feature to allow sellers to specify individual minimum prices. More than that, the prices would automatically update within certain intervals. The bug was fixed in the subsequent versions of the software.
Figure 2 - Fixed UI (with newly added column Your Minimum Price)
The day when the bug revealed itself will be remembered for long by the Amazon sellers. That day, they lost thousands of dollars and many nearly went bankrupt (source). But for the prompt action taken by Amazon, which managed to cancel the majority of orders placed on the affected items, the largest online store's reputation would have been severely damaged.
The RepricerExpress developers apologized for the bug in a statement posted on their official blog.

Apple VS New Year

Remember the film "How the Grinch Stole Christmas"? It seems that the Evil Bug took it as a source of inspiration when thinking up a plan of attacking Apple devices. In February 2016, Apple users discovered an interesting bug. There was a legend going around on social networks saying that if you changed the date to January 1, 1970, on your iPhone or iPad and restarted it, the system would completely crash leaving you with a brick with an Apple logo on it. The procedure was claimed to be irreversible. The bug was reported to be found on devices that employed 64-bit processors, such as Apple A7, A8, A8X, A9, and A9X: iPhone 5S and newer, iPad Air and iPad Mini 2 and newer, and the 6th generation iPod Touch. The operating system's version number did not matter.
Did anyone try it? Sure! A wave of Apple-gadget killings swept through the world. Fortunately, some handymen found a way to bring the "bricks" back to life. Apple never revealed the official cause of the bug, but they did confirm it could occur when manually changing the date to May 1970 or earlier on an iOS device.
Users carried out their own investigation and came up with the following explanation: the bug could have been caused by a negative-value variable used to store time in UNIX format. How could the value become negative?
Version 1. Since time was stored in UNIX format, counting starts at January 1, 1970; that is, this date corresponds to a value of zero. When changing time zones, the value could drop below zero.
Version 2. The bug was typical of 64-bit devices, so perhaps a 32-bit time value was computed first and then, after a time-zone change, was extended to the pointer size, causing the most significant bits to be filled incorrectly and... Welcome to the XXII century!
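Just to make Version 1 concrete, here is a toy illustration (not Apple's code, of course) of how a local time at the very start of the epoch can correspond to a negative Unix timestamp:

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Toy illustration of "Version 1": midnight of 1970-01-01 local time in a
       zone east of Greenwich lies before the epoch in UTC terms. */
    time_t midnight_utc = 0;                        /* 1970-01-01 00:00:00 UTC */
    long gmt_offset = 3 * 60 * 60;                  /* e.g. a UTC+3 time zone */
    long long local_midnight_as_utc = (long long)midnight_utc - gmt_offset;
    printf("%lld\n", local_midnight_as_utc);        /* prints -10800 */
    return 0;
}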

Sleep long with iPhone

Long, long sleep not interrupted by an alarm clock - isn't it what most of us dream of? Though they are not Gazprom, iPhones do make their owners' dreams come true! All those who planned to start the first day of 2013 fresh and early and set an alarm on their devices for January 1 "happily" overslept. Evil Bug obviously meant to turn a huge number of users into "sleeping beauties", as the iPhone alarm clock wouldn't work until January 3.
Apple preferred to keep silent again. However, rumors about the possible cause of the bug spread anyway. Apple uses the ISO week date standard, which is widely used by finance companies because it makes fiscal-year planning convenient. What is special about this standard is that the new year is considered to begin only with the week that the first Thursday of the year falls on. The ISO week date calendar contains 52 or 53 weeks (364 or 371 days), so, as the rumor went, iPhones were still living in the previous year and stepped into the new one (2013) only on January 7, when the first week of the year began.
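A small, tangential illustration (not Apple's code, and the exact calendar math behind the 2013 incident was never confirmed) of how a week-based year can disagree with the calendar year around New Year; January 1, 2012 is a convenient date where the two clearly differ:

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* January 1, 2012 fell on a Sunday and belongs to ISO week 52 of 2011,
       so the ISO 8601 week-based year still says 2011. */
    struct tm date = {0};
    date.tm_year = 2012 - 1900;
    date.tm_mon  = 0;              /* January */
    date.tm_mday = 1;
    date.tm_isdst = -1;
    mktime(&date);                 /* fills tm_wday/tm_yday, which %G and %V rely on */

    char buf[80];
    strftime(buf, sizeof buf, "calendar year %Y, week-based year %G, ISO week %V", &date);
    puts(buf);                     /* calendar year 2012, week-based year 2011, ISO week 52 */
    return 0;
}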
There was also an alternative explanation, where Steve Jobs himself took on the role of Evil Bug. The Apple founder was allegedly fond of sleeping in, hence that "feature". It's just a joke of course, but the consequences of that seemingly harmless bug were far from funny: the people were late for work, failed to get to important meetings in time, and lost money (source).

Flight cancelled

The price of a software bug is the factor that developers should never ignore. Here is another Christmas bug story to support this statement.
On December 12, 2014, the UK's air traffic control center, run by National Air Traffic Services (NATS), was hit by a software glitch that brought the work of several airports, including such heavily loaded giants as Heathrow, Gatwick, Stansted, Birmingham, Cardiff, and Glasgow, to a halt. The problem was aggravated by the time Evil Bug chose for the attack: a Friday afternoon in the run-up to Christmas.
The fault persisted for a little longer than half an hour - 36 minutes - but the price of the error behind it was steep, as illustrated by the following figures, which Evil Bug can be proud of:
  • 92 flights cancelled
  • 170 flights suspended
  • 10 planes re-routed to alternate airports
  • 125,000 passengers experiencing inconvenience
  • 623 million pounds of losses suffered
A situation like that could not pass unnoticed, and an investigation was carried out. In their final report, the Civil Aviation Authority (CAA) and NATS described a bug found in the software of the System Flight Server (SFS). The SFS is responsible for real-time delivery of data to controller workstations within the NATS management system. There are two identical SFSs, primary and secondary, both computing the same data; when the primary SFS shuts down, the secondary one takes over. The system did provide for hardware faults, but for some reason lacked any protection against software exceptions.
The maximum permitted number of operational workstations (i.e. terminals from which traffic control and monitoring are carried out) running at a time was 193 - in theory, at least. In reality, the SFS's code checked against another value, 151. That's why, when 153 workstations attempted to connect simultaneously, the system reset and subsequently crashed. It was found later that the "latent software fault" had been present since as early as 1990. It's a wonder it hadn't shown up earlier.

The Year 2000 and Year 2038 problems

The New Year of 2000 was one of the most anticipated ones. As various self-proclaimed experts believed, the turn of the millennium was definitely going to be accompanied by the Apocalypse or, no less terrifying, the rise of the machines.
What arguments did they give for their fear of Terminators? Logic! The first computers were slow, so programmers, unwilling to waste precious performance on trifles, decided to use two digits to represent the year in dates. For example, March 23, 1991, was represented as 23.03.91. This notation is nice and normal to the eye. However, from a computer's viewpoint, it's not that simple. The years 2000 and 1900 were encoded by the same pair of digits, 00, so when the New Year of 2000 began, the internal clock would effectively be set back to the year 1900.
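A tiny illustration of the classic pitfall (a deliberately naive sketch, not code from any particular system): C's struct tm stores the year as an offset from 1900, and code that glued a literal "19" onto the last two digits broke exactly at the millennium:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct tm date = {0};
    date.tm_year = 100;                               /* tm_year counts years since 1900, so this is 2000 */
    printf("naive:   19%02d\n", date.tm_year % 100);  /* prints 1900 */
    printf("correct: %d\n", 1900 + date.tm_year);     /* prints 2000 */
    return 0;
}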
People could not help visualizing the dreadful effects of such a terrible fault: software crashes, spontaneous missile launches, the financial market collapse. The most horrible things were expected to happen in Russia as a country worst prepared for the new millennium.
Well, 2017 is approaching, which means the Apocalypse never happened.
That said, certain bugs did show when the new millennium came:
  • British Telecom's computer networks were paralyzed, and engineers had to analyze about a million lines of code to bring them back to life. It cost British Telecom quite a large sum, about 0.5 billion dollars.
  • In Spain, emergency conditions were observed at 9 nuclear plants - fortunately, without any serious consequences.
  • In Mongolia, the "Year 2000 problem" affected railway operation and ticket offices.
Some of the bugs were quite amusing:
  • Terms of imprisonment in one Spanish prison were stretched/cut by 100 years
  • In some Greek stores, buyers would get sales slips dated 1900
  • In a South Korean hospital, the patient monitoring software declared a one-year-old baby an old man of 99
  • The citizens of a small US town got electricity bills overdue by 100 years
The "Year 2000 problem" is a striking example of the profound effect that mass media have on humankind. The next wave of mass panic for a similar reason is expected in 2038. On January 19, 2038, at 03:14:07, Greenwich, computers and other devices using 32-bit operating systems will no longer be able to measure time properly. In many devices, system time is measured in seconds starting with January 1, 1970. The seconds are stored in a 32-bit value of type signed int (32-bit signed integer). Soon after the beginning of 2038, the counter will update with the 2,147,483,648th second, which the system will not be able to store, and switch to a negative value.
How can the resulting system error be avoided? By moving to systems that store time in a 64-bit value - in practice, 64-bit platforms and a 64-bit time_t.
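A short sketch of what goes wrong with a 32-bit signed counter (illustrative only; whether a real system wraps exactly this way depends on how it stores and converts the value):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* 2,147,483,647 seconds after 1970-01-01 is 2038-01-19 03:14:07 UTC,
       the last second a 32-bit signed counter can represent. */
    int32_t last_second = INT32_MAX;
    printf("last representable second: %d\n", last_second);

    /* Converting the next second back into 32 bits loses the top bit;
       the conversion is implementation-defined, and on typical two's-complement
       systems the result wraps to -2147483648, i.e. a date back in December 1901. */
    int64_t next_second = (int64_t)last_second + 1;
    int32_t wrapped = (int32_t)next_second;
    printf("one second later (32-bit): %d\n", wrapped);
    return 0;
}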

How to help Good?

Traditionally, Good always defeats Evil, but the struggle doesn't stop for a moment. Is there any chance of exterminating all Evil Bugs for good? Unlikely, but we definitely have every chance to deal massive damage to their troops. To do that, programmers fighting on Good's side, i.e. for quality code, should wisely pick the tools that help them in the fight. Arm yourself with the PVS-Studio static analyzer! And be sure to check out this short horror film about Unicorn PVS-Studio saving Penguin Linux from Evil Bug.
Feel inspired? Then let's help Good together! The PVS-Studio team has already made a big step forward by offering you the free version of our analyzer.
Dear programmers, good luck with your projects, and may Good always win in your evil-bug stories! Merry Christmas and a Happy New Year!