As hinted at in “Desktop Witness” by Michael A. Caloyannides, we may well be inadvertently funding the work of spies and hackers when we purchase personal computing resources without investing reasonable effort into securing, and securely using, those resources. So whenever budgeting for computer resources, also factor in the effort required to keep and use such equipment securely: you don’t want to end up worse off because of the computer purchases you made.
Reporting crime to the police doesn’t just potentially obtain for you free, tailored computer-security assistance from the authorities. It also helps to prevent crime in general, so that there may be fewer criminals around in the first place to attack your systems, perhaps simply because they have been deterred from committing any crime by your prior involvement of the police.
Think in terms of gradual movement along a security-level continuum
Security isn’t simply a Boolean secure-or-not-secure proposition. There is a continuum of security levels, from least secure to most secure. Instead of thinking in terms of simply needing to be secure (which is certainly a worthy aspiration), think in terms of progressively moving along the continuum, day by day, gradually getting more and more secure on the whole. Such progressive movement makes sense in terms of balancing the risks of security compromise against the effort required to become secure. Also, don’t be concerned about ‘overdoing it’: it’s not an exact science, and it’s often better to do too much than too little. And don’t worry too much about mistakes; so long as they’re not too frequent, fail-safe mechanisms in your security, coupled with threats being probabilistic, mean that they won’t matter that much.
There is a security principle whereby entities seek security levels that are minimally above average. The rationale behind such an approach is as follows. It is supposed that adversaries have in their arsenal attacks that overcome average security levels, but not attacks that can do much more. They target users so that the “hacking success to work done” ratio is roughly optimal, which generally means constructing attacks that can be reused as much as possible (to save on work done). For every extra unit of work done, the ratio diminishes, equating to less “bang for your buck” (for the hacker). So to defend against such adversaries, an entity may choose to implement security that is above average, but only by enough to defend against adversaries who look for common security weaknesses amongst users as a whole, rather than being able to hack the most secure of systems (“Fort Knox”-like systems).
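As a toy numeric sketch of this 'bang for your buck' reasoning, consider the following Python model; the cost and target-distribution functions are entirely made up for illustration:

```python
import math

# Hypothetical attacker economics: building an attack that defeats security
# level L costs more as L rises, while the pool of targets at or below L
# follows a made-up cumulative distribution. The attacker wants to maximise
# the "hacking success to work done" ratio.

def targets_at_or_below(level):
    # Made-up cumulative distribution of user security levels (0..10)
    return 1000 * (1 - math.exp(-level / 2))

def attack_cost(level):
    # Made-up cost: defeating higher security levels is disproportionately hard
    return 10 * math.exp(level / 2)

for level in range(1, 11):
    ratio = targets_at_or_below(level) / attack_cost(level)
    print(f"attack defeats level {level:>2}: success/work ratio = {ratio:6.1f}")

# The ratio peaks at a low-but-not-minimal security level and then declines,
# so a defender only minimally above the common level already falls outside
# the range that makes reusable attacks economical.
```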
It may seem counterintuitive to publish your security methods. However, doing so can actually increase your overall security (rather than decrease it).
Firstly, by publishing your methods, you can get useful feedback on the weaknesses and strengths of your current security methods, and then adjust them accordingly.
Secondly, your activities are unlikely to be completely isolated; instead, they are likely to depend upon a number of relationships with other entities. From this perspective, improving the security of the wider community can provide better security for yourself.
Thirdly and finally, the widespread adoption of better security methods can make hacking you more difficult than if only you adopt better security methods. Suppose you are a hacker with 100 entities on your agenda to hack. If just one entity improves its security, hacking your 100 entities perhaps isn't negatively impacted so much. However, if all 100 entities improve their security, then hacking any one entity is made even more difficult, because the hacker has to bear in mind that they must also allot sufficient resources (including time) to hacking the 99 other entities; the hacker may become simply too stretched for time and resources.
User randomly selecting a unit off physical shelves
Getting a unit sent to you in the post is not necessarily the most secure way to obtain a unit of any particular product. In particular, obtaining units in such ways is vulnerable to man-in-the-middle (MITM) attacks.
Instead, it may well be more secure to buy a unit by first physically visiting a large physical store, selected at random and geographically distant from your address, and then, at the store, personally choosing and purchasing a shrink-wrapped unit fresh from its factory (or other product source) off the physical shelves, selecting at random from shelves holding many other units of the same product.
Security can be increased by purchasing very many units of the same product. Because you have so many units in your possession, it should be more difficult for an attacker to tamper with every single unit. One unit can then be chosen at random, and all other units can then be returned for either partial or full refunds.
Perhaps a good way to ensure a product isn't tampered with before use is to obtain the product through multiple channels. For example, software can be downloaded, bought from a physical shop[1], and also bought from an online shop[2]. The three versions of what should be the same installation software, once obtained, can then be byte-for-byte compared with each other to see whether they are exact copies; if one isn't, that likely points to tampering having been conducted on one or more of them.
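As a minimal sketch of how the byte-for-byte comparison could be automated (the file names here are hypothetical), each copy can be hashed and the digests compared:

```python
import hashlib

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# The three copies of what should be identical installation software
# (hypothetical file names):
copies = ["installer_download.iso", "installer_shop.iso", "installer_online.iso"]
digests = {path: sha256_of(path) for path in copies}

if len(set(digests.values())) == 1:
    print("All copies are byte-for-byte identical.")
else:
    print("Mismatch detected; one or more copies may have been tampered with:")
    for path, digest in digests.items():
        print(f"  {path}: {digest}")
```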
Discerning the unit least likely to have been compromised
If obtaining hardware, instead of using random selection, a potentially better way to select which unit to keep (one that hopefully hasn't undergone any tampering) is as follows. First, purchase several units of the same hardware. Once several units have been obtained (let's say, for the sake of example, seven units), the user needs to figure out which of the units has the least likelihood of having undergone any tampering, ideally without damaging them, so that excess units can be returned for refunds (hopefully full refunds). To help in this, the user can use one or more non-invasive and non-destructive measuring methods, some of which are documented in the next section, entitled “Measuring physical properties for authentication”.
The measuring can be undertaken before opening the hopefully shrink-wrapped boxes that contain the units (to facilitate being able to return, later on, no-longer-needed units). The measurements that are most common amongst the units determine the quality-control parameters. Then any one unit fitting those parameters can be selected. This selection process is, on the whole, better than pure random selection, as it is better able to sift out any tampered units that may be in the mix, by using a balance-of-probabilities approach: tampered units hopefully will have odd measurement readings. So for example, if six units weigh 100g and only one unit weighs 101g, you would choose not to keep the unit weighing 101g (the odd, and potentially tampered-with, unit).
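A minimal sketch of this majority-vote selection, assuming hypothetical weight readings in grams (real measurements would first be rounded to the scales' tolerance):

```python
from collections import Counter

# Hypothetical weight readings (in grams) for seven units of the same product
weights = [100, 100, 101, 100, 100, 100, 100]

# The most common measurement defines the quality-control parameter
counts = Counter(weights)
majority_value, _ = counts.most_common(1)[0]

keep_candidates = [i for i, w in enumerate(weights) if w == majority_value]
suspect_units = [i for i, w in enumerate(weights) if w != majority_value]

print(f"Units matching the majority measurement ({majority_value} g): {keep_candidates}")
print(f"Suspect units (odd readings, possibly tampered with): {suspect_units}")
```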
Since the measuring techniques involved hopefully won't damage the units, returning excess units for a full refund should not be a problem.
How do you confirm that a physical device in your possession is exactly the device it ought to be, as it was manufactured? It could not only have undergone tampering; it could in fact be an entirely different device that merely has the appearance of the actual genuine device.
Measuring physical properties appears to be a good way to make sure you have the genuine device in your possession. Whilst an adversary may be able to replicate one of the physical properties, the convolution (or intersection, you might say) of several (or maybe even just two) is likely quite hard to replicate in any kind of imitation or tampered-with device. Once we acknowledge that this principle is valid, we turn to what physical properties we can measure in applying it.
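As a hedged sketch of the principle, with entirely hypothetical reference values and tolerances, a unit could be checked against the intersection of several measured properties:

```python
# Hypothetical reference measurements for the genuine device, with tolerances
REFERENCE = {
    "weight_g":          (100.0, 0.5),   # (expected value, allowed deviation)
    "volume_ml":         (45.0,  1.0),
    "magnetic_weight_g": (2.3,   0.2),
}

def authenticate(measured):
    """Accept only if every measured property falls within its tolerance band."""
    for prop, (expected, tolerance) in REFERENCE.items():
        if abs(measured[prop] - expected) > tolerance:
            print(f"{prop} out of range: {measured[prop]} vs expected {expected}")
            return False
    return True

# A single replicated property passes its own check, but the intersection of
# all the properties together is what an imitation device is unlikely to satisfy.
print(authenticate({"weight_g": 100.2, "volume_ml": 44.8, "magnetic_weight_g": 2.25}))
```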
In the case of devices that cannot be damaged by slow-moving water, volume can be measured simply by measuring the displacement of water when the device is placed in water[3].
In the case of other devices, if the device is low cost, a user can purchase two units of the device: one to be measured using destructive methods, the other to keep, assuming the security tests succeed. The one to be measured can have its volume measured in the same way as just outlined. Doing so will destroy that device, but you will still have the other device to keep. Once you have obtained the two units, you select the one to measure by random selection, which is very important. In the scenario that an adversary was able to replace just one of the units, they will likely have replaced both units (and not just one) with imitation or tampered-with units; so a test of one of the units will likely also speak for the other unit[4].
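The random-selection step might look like the following minimal sketch (the unit labels are hypothetical); Python's secrets module is used rather than random so that the choice is unpredictable to an adversary:

```python
import secrets

# Two hypothetical units of the same low-cost device
units = ["unit_A", "unit_B"]

# Unpredictably choose which unit undergoes destructive measurement;
# the remaining unit is the one kept (if the tests succeed).
to_destroy = secrets.choice(units)
to_keep = [u for u in units if u != to_destroy][0]

print(f"Destructively measure: {to_destroy}")
print(f"Keep (if tests pass):  {to_keep}")
```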
The 'magnetic weight' of a device can be measured: that is, the force of attraction, in newtons, between the device and a specific magnet held at a set distance from the device. A simple set of normal (gravity-based) weighing scales can be used for this (a small calculation sketch follows the list). What you do is:
i. place a magnet a set distance above the top of the scales;
ii. place the device on the top of the scales;
iii. measure the weight;
iv. take away the magnet and measure the weight again;
v. calculate the weight difference to give an indication of the 'magnetic weight'.
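A trivial sketch of the final calculation, with hypothetical scale readings:

```python
# Hypothetical scale readings, in grams
weight_with_magnet = 97.7      # device on scales, magnet held at set distance above
weight_without_magnet = 100.0  # same device, magnet removed

# The difference indicates the 'magnetic weight': how strongly the device
# is attracted toward the magnet at that fixed distance.
magnetic_weight = weight_without_magnet - weight_with_magnet
print(f"Magnetic weight: {magnetic_weight:.1f} g-equivalent at the set distance")
```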
It is suspected that magnetic resonance imaging (MRI) could be used to profile an electronic device for security purposes. The question is whether the cost of performing such imaging can be reduced sufficiently for use by general users; it is suspected that it probably can. The "50x50mm Small Magnetic Field Viewing Paper" product available here looks like it might help in this regard.
Real and fake microSD cards, compared with photographs.
Straightforward conventional photography using the visible spectrum can be used to check whether a certain device looks externally (and sometimes also internally) as it should (a form of quality control classed under visual inspection). With respect both to photographing the internals and to convincing users that there is no embedded, hidden espionage technology, it is useful for the device to use transparent materials wherever possible[5].
T rays are analogous to x rays but instead involve terahertz radiation, lying between microwaves and the infra-red. Infra-red filters appear to be readily available, at relatively low prices, for cameras (including those on mobile phones). T rays are capable of passing through plastic but not through metal; this property enables t rays to be used to detect metal weapons concealed on persons. Could it be possible that such t rays might also pass through silicon? If t rays do in fact pass through silicon and related materials, then perhaps cameras (including those on mobile phones) can be adapted to take t-ray images of electronic devices, as a form of security verification.
Microwave testing is already a tool used in engineering for performing non-destructive testing to find defects in technical parts. As presently used in engineering, it most likely isn't low-cost enough. However, could it be that a microwave oven, in conjunction with a smartphone, might be capable of performing such testing? The project hosted here could possibly be useful for this.
Ultrasound images and/or readings appear quite likely to be useful for security authentication of devices. Unfortunately, the price of the associated equipment is normally relatively high. However, there is such a thing as a DIY ultrasound imaging kit, which could directly or indirectly bring down the cost of using this technology. Also, a simple ultrasonic sensor, which seems potentially much cheaper and more affordable, may be sufficient for exploiting ultrasound to perform security authentication. According to Wikipedia, ultrasound testing has been used for industrial non-destructive testing of welded joints since at least the 1960s.
Because of things like hidden cameras, you may want to enter security credentials (such as passwords), only in secure locations. For example, you may choose not to unlock your phone when in public places because of the increased likelihood of an adversary surreptitiously photographing or videoing the password you enter.
Also, certain geographies are more prone to attack because of various factors, such as matters of a political nature, as well as how powerful the local military is. In this respect, it may even be worthwhile to rely on geographies that are technologically backward and that have weaker politics and/or political groups.
Adversaries may initiate their attacks only from a certain time onward. And perhaps we can make assumptions as to when those attacks were definitely not taking place. For the sake of example, let’s say we assume that we were not being attacked six months ago, and so were not compromised at that time. If some of the artefacts produced from historic activity dating from before six months ago were isolated six or more months ago, it might be possible to draw extra security from such artefacts.
Nine months ago, I downloaded the public key for some website, and saved it in a way that kept it quite isolated from other systems from that time onward. For example, isolation may have been induced by printing out the key and keeping it in a safe. Isolation could also have been induced by burning it to an encrypted read-only CD[6].
Now that nine months have passed, I find that adversaries are tampering with my internet communications so that I am downloading the incorrect key. In reliance on 'security based on time passed', I can retrieve the genuine key I saved nine months ago.
I often engage in email exchanges with a colleague who always writes their public authentication PGP key at the bottom of the emails they send. Three months ago, an adversary started intercepting the emails and changing the key in all emails sent henceforth. By relying on 'security based on time passed', I can access the emails from my colleague from before six months ago (a time at which I know I had system security), and obtain the genuine key. I can then see that the key is now different, and therefore figure out that something suspicious is now happening.
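A minimal sketch of the comparison step, assuming hypothetical key-fingerprint strings:

```python
# Hypothetical PGP key fingerprints
archived_fingerprint = "A1B2 C3D4 E5F6 0718 293A  4B5C 6D7E 8F90 1234 5678"  # isolated >= 6 months ago
current_fingerprint  = "FFFF C3D4 E5F6 0718 293A  4B5C 6D7E 8F90 1234 5678"  # seen in today's email

def normalise(fp):
    """Strip whitespace and unify case before comparing."""
    return "".join(fp.split()).upper()

if normalise(archived_fingerprint) == normalise(current_fingerprint):
    print("Fingerprints match: no evidence of key substitution.")
else:
    print("Fingerprints differ: possible man-in-the-middle key substitution.")
```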
Note that ‘security based on time passed’ can have a vulnerability in respect of security holes in software. For example, software from five years ago may have security holes that have been discovered and published since then; from this perspective, it may be better not to rely on that historic version of the software, and instead to try to obtain a newer version securely (one in which the holes have been fixed). With this in mind, if using 'security based on time passed' in conjunction with software, it may be best only to use it with very simple software, where the associated algorithms and code have been thoroughly researched and tested for security. This is to mitigate the chance that security holes are discovered subsequent to your storage of the software under this principle.
The time taken to forge something can be used in security mechanisms. For example, someone could place their mobile phone in a seamless ink-absorbing paper bag, the tearing of which is hard to repair through the use of things like superglue, and then sign a random fountain-pen ink squiggle on it. The time taken to forge a replacement paper bag bearing the same squiggle can mean that the phone is secured for several hours from the time of the squiggle.
Instead of entering your security credentials at any arbitrary time, a window of time can be chosen when security is high, based on factors such as when hackers are likely to be awake. The security credentials can be entered in that window, and in such a way that you don't need them again for a while. For example, log in to your computer at 6am, because most hackers in your own time zone are perhaps asleep at that time.
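As a toy sketch, assuming a hypothetical low-risk window of 5am to 7am local time:

```python
from datetime import datetime, time

# Hypothetical low-risk window: most hackers in one's own time zone
# are assumed asleep between 5am and 7am
WINDOW_START = time(5, 0)
WINDOW_END = time(7, 0)

def in_low_risk_window(now=None):
    """Return True if the current local time falls inside the chosen window."""
    now = now or datetime.now()
    return WINDOW_START <= now.time() <= WINDOW_END

if in_low_risk_window():
    print("Within the low-risk window: a reasonable time to enter credentials.")
else:
    print("Outside the window: consider waiting before entering credentials.")
```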
Simple measures can be put in place to help remind oneself to stay secure. For example, when using a laptop, why not rest it on its wallet case (if it has one)? That way, you are reminded to place it in its wallet case when you close it up. Another simple measure is not to leave a key in an unlocked self-locking padlock—you no longer need the key, so put it away. That way, you reduce the risk of forgetting to store the key after you've used it.
Security reminder of resting Chromebook in its wallet whilst it is being used
Security measure of taking key out of self-locking padlock
The forming of security habits is another simple measure to help prevent forgetting to secure things. On day 1 and day 2, perhaps you make mistakes in your security routine; however, through simple repetition over many days, weeks, and months, you reduce your mistakes more and more through the force of acquired habit.
Building your own devices and security measures can be good for security. When doing it yourself (DIY), you can better make sure that hidden espionage technology (possibly embedded in materials) is not present. The DIY principle can be even more effective when you choose very cheap and commonly available materials, because you can often be more sure that hidden embedded espionage technology is not in such materials. For example, why not think about building a computer out of cardboard? You can pick the cardboard at random, and be pretty sure no hidden espionage technology is embedded in it.
The Heads system[7] outlines a security principle that can perhaps be roughly described as 'destroy-key-when-attacked security'[8]. The trusted platform module (TPM) on a motherboard stores the encryption-decryption (symmetric) key, which is only accessible upon entering the correct password. The Heads system will have the TPM destroy the key if incorrect passwords are entered too frequently, i.e. if it seems the computer system is under attack (a form of so-called rate limiting). Once the key has been destroyed, it should no longer be on that computer system. The true user will find out that the key has been destroyed, and can then use their backup measures to retrieve it (relying on a backup of the key kept outside the computer system). This security principle can be applied more broadly; for example, cryptocurrency (such as Bitcoin) keys can perhaps be safeguarded in the same way[9].
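A toy sketch of the principle (not the actual Heads/TPM implementation), with a hypothetical attempt limit:

```python
import secrets

MAX_ATTEMPTS = 3  # hypothetical rate limit before the key is destroyed

class DestroyOnAttackKeyStore:
    """Toy key store that wipes its key after too many wrong passwords."""

    def __init__(self, password):
        self._password = password
        self._key = secrets.token_bytes(32)  # the protected symmetric key
        self._failures = 0

    def unlock(self, password):
        if self._key is None:
            raise RuntimeError("Key destroyed; restore from external backup.")
        if secrets.compare_digest(password, self._password):
            self._failures = 0
            return self._key
        self._failures += 1
        if self._failures >= MAX_ATTEMPTS:
            self._key = None  # destroy the key: it is no longer on this system
            raise RuntimeError("Too many failures; key destroyed.")
        raise ValueError("Wrong password.")

store = DestroyOnAttackKeyStore("correct horse battery staple")
for guess in ["a", "b", "c"]:  # simulated attack of repeated wrong passwords
    try:
        store.unlock(guess)
    except Exception as e:
        print(e)
```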
Relying on high production cost of certain security tokens
Perhaps a novel way to ensure better communication of things like public keys is for an organisation to create a security token containing the key that is very expensive to produce and replicate. It could be, for example, a genuine gold coin with the key on it, or some kind of holographic paper or card with the key on it. What would happen is that the organisation would post out the security token to end users. The end users would then possibly pass the security token amongst themselves, or just post it back to the organisation. Because it would be so expensive to (re-)create the token, the token would not be thrown away, but simply shared in order to communicate the public key to end users. Adversaries would find it too costly to create fake tokens, because of the expense of creating the token.