Category:TODO/End-user Computer Security book

From Wikibooks, open books for an open world

Discussion pages   (labelled according to name of corresponding book page):

Book cover

Policies, guidelines, and guiding principles established for this Wikibooks book

To categorise one of the book's non-text media files, edit the file's description page by navigating to a URL:
  1. that begins with https://en.wikibooks.org/wiki/File:,
  2. that then has the name you obtained following that beginning text, and
  3. that then finally ends with the text ?action=edit.
In the file description page that appears, add the text {{End-user Computer Security/Categorise into book subcategory|book_category=Book:End-user Computer Security/Non-text media/???}} to the page, with the three question marks replaced with the path dictating the exact sub-category of the "Non-text media" book category, under which the media is appropriately categorised.
  • For the time being, the book deliberately does not have a downloadable version as the book's material is very much liable to change (including change classed as correction).
  • The book is moving towards a goal of not being "the book authored by MarkJFernandes", and instead being "the world's book" (presumably like many of the other books hosted on Wikibooks).
  • Mostly, new ideas should be added to the different talk/discussion tabs of the book's main content. When ideas are associated with a particular chapter, those ideas should be in the talk/discussion tab of the particular chapter. MarkJFernandes may be able to do this work for you; in this regard, it is helpful for you to give him general copyright permission to facilitate this.
  • When readers read the book's sections, they should see up-vote and down-vote icons just next to the section headings. Readers can click those icons to up-vote or down-vote different sections, with the option of adding extra information pertaining to their vote. Clicking those 'buttons' immediately after reading a section is perhaps a good idea when a reader has clear sentiments regarding the section just read.
  • The book is based more in ongoing, never-ending, democratic collaborative research than in being a treatise on an established subject. Partly because of this, entities that are in some way linked to parts of the book (even simply by being mentioned) are encouraged to contribute to the work, even if just through peer review, and especially in respect of the parts connected to them.
  • In the book, there is the tendency to focus on principles rather than implementation-specific things. Naming a particular implementation is fine (for example, the Qubes implementation and the Google password manager are named), but probably such naming should be used for examples of concepts, rather than as the nuts-and-bolts of a concept. The Google password manager is documented in a little detail, but the underlying concepts could be applied to a variety of password managers (in theory).
  • The book does touch upon some theoretical ideas. For example, there is mention that the cross-signing of certificates might be a way to strengthen the current TLS-certificate-based security system.
  • With respect to the mentioning of implementations, often they should only be mentioned in the main body of the work if they are 'par excellence' instances of the related theories/principles put into practice. When the relationship to an implementation is not one of the implementation being a 'par excellence' instance, it is then perhaps better to mention the implementation in the Appendix, and then just link to the Appendix from the related section in the main body of the book. Alternatively, using footnotes might also be appropriate in such cases. It should also be considered whether it is perhaps better to completely leave out from the book (including from the Appendix) mention of an implementation connected with a particular theory/principle.
  • 'New security inventions requiring a non-trivial investment in new technology', should be documented in the Appendix, in the "New security inventions requiring a non-trivial investment in new technology" section.
  • Sometimes it is unclear how precisely to integrate certain ideas into the book, perhaps because the ideas haven't 'solidified' into definite nuggets of knowledge/understanding/information worthy of inclusion in the main content of the book. In such instances, it is perhaps better to build up some more information on the related issues before doing any such integration. If following this guideline, you can still record such ideas and notes by simply adding them to the talk/discussion tabs of the book's pages.
  • It's important to add insights gathered from the practice of security concepts. The book is tilted towards theory, mostly because MarkJFernandes didn't possess practical experience with the various security ideas. This is perhaps a weakness of the first versions of the book, but should hopefully be ironed out through time, as insights from practical experience are increasingly added. The book was written out of necessity, mostly because there appeared to be a dearth of information on the issues covered.
  • The first versions of the book probably didn't focus that much on privacy issues, except in the sense of privacy of security credentials. MarkJFernandes wasn't so concerned with other privacy issues. Whether this should form an ongoing policy, or instead just be statements about the first versions of the book, is unclear.
  • MarkJFernandes is of the opinion that the general lack of meaningful resources for end-user computer security, is likely a hidden way for various groups to be able to spy on, and interfere with, people with ease (see the "Stop funding the spies and hackers" section).
  • There is an attempt to make the book unique (you might say with a "unique selling point" {USP}) in that it deals with inexpensive security. That's what MarkJFernandes wanted for himself, and what he believed would also be very much helpful to users all over the world (especially those who are not so wealthy).
  • Part of the original philosophy behind the book:
  • There should be a transfer of sovereignty to citizens. Robust secure computing is likely integral to this. Computing is so important these days, that its general compromise is a threat to peace, democracy, education, and probably lots of other things. Legitimate policing organisations should probably be encouraged to be open and honest in their interrogations of suspected criminals, rather than allowing compromised technology to be prevalent as some kind of means for detecting and/or preventing crime. Honesty and integrity are vital. Lies and deceptions are generally not good. Lies and deceptions might be able to convict more criminals for longer sentences, but honesty and integrity might help to turn people away from crime in the first place.
  • Democratic resources like wikis are good for involving people at the grass-roots level from different backgrounds, so that people's voices can be heard.
  • Computer technology is something of a fashion: people adopt it because it is fashionable, not necessarily because it helps situations. Its acceptance is almost dogmatic. But the reality is that less tech is sometimes better.
  • MarkJFernandes tried not to get very much into politics in the book. He tended to follow the principle that a user should be able to use digital technologies safely and securely, regardless of their political opinions. In contrast, privacy rights are often violated by governments using the rhetoric that they need to be able to detect terrorism. The book goes for more of a human-rights, bottom-up perspective, where government is instituted by the individual persons making up a people in order to protect human rights, and where persons in the messiness of everyday living can work out issues (such as terrorism issues) as they go along (they can help each other come to better thinking through means such as dialogue, facilitated by degrees of freedom in communication and thought).
  • MarkJFernandes generally favours things like social media, where grassroots opinions can come to the fore. He considers that social media is sometimes a good way to overcome propaganda.
  • Paranoia can be an effective tool and motivator for security development. The accusations of paranoia perhaps fail to see this: turn negative paranoia into positive security development.
  • The book was originally written mostly in respect of security for MarkJFernandes as a self-employed individual needing to use digital technologies. It's partly a wiki because he knew how limited his knowledge and experience were, and because he wasn't an expert in the area of computer security. Still, he was quite shocked at how poor the prevailing advice on computer security seemed to be (almost as though there was something underhand in it).
  • Getting the opinions of people from widely differing backgrounds is good. A person having special circumstances, even those of being marginalised, can mean that their advice is particularly unaffected by the conflicts of interest often surrounding security advice.

     MarkJFernandes (discusscontribs) 10:06, 9 June 2020 (UTC)


Possible improvements

  • Possible improvement is to remove repetition in footnotes such as the footnote `as detailed later on in the section entitled “user randomly selecting unit from off physical shelves”.`
  • Possible improvement is to lighten photographs so that they stand out less. This may make reading such parts easier, as the photos can otherwise visually intrude too much.
  • Originally, wanted to keep each chapter on its own page, probably because I had thought that going back and forth between sections within a chapter was important for understanding the material. Now am thinking that there is too much content on some chapter pages. Instead, it is probably a good idea to have each gold-headed section on its own page (this would be more in line with how other Wikibooks books are structured). If doing such rearrangement, should be mindful that I have sent messages (such as to the Qubes mailing list) with links to sections in the present book structure; those links may break if I restructure without establishing appropriate link redirects.
  • Headings beneath the level of gold-coloured headings do not appear conspicuous enough, and the related sections don't appear distinguished enough. Adding extra formatting to improve this is probably a good idea; such formatting may include using different font colours, different fonts, bigger font sizes, indenting text beneath headings more than their headings, and using more vertical space between sections.
  • The vertical spacing between entries on the contents page could be improved. The sub-section entries are too close to each other (not distinguished enough), and the page could do with greater grouping together of sub-section and sub-sub-section entries that fall within the same parent, along with greater distinction of the groupings from each other.
  • The blue colour used for hyperlinks makes reading the text slightly difficult (perhaps the text becomes a bit ugly). This is partly because I have opted to use many hyperlinks (to provide further additional reading related to the text). To improve this, it might be a good idea to use a colour for hyperlinks that is only slightly different from the colour used for un-hyperlinked text, or perhaps instead to add a very slight colour highlight to hyperlinks whose font colour is the same as that of un-hyperlinked text. It should be noted that users may use book-reading skins, such that the colour used for un-hyperlinked text may not simply be black. An alternative approach might be to warn users that there are many hyperlinks in the text, and that to find them they will have to hover their mouse pointer over text; if there is a hyperlink, hovering over it will likely underline the link, display some hover text indicating its presence, and display the hyperlink address in the web browser's status bar (often at the bottom of the window).
  • It might be a good idea not to incrementally change the main contents of the book for the next group of updates, but instead to lump all the updates together into one new version of the book (version 2), committing them all at once. This probably would make sense, as there are corrections/improvements to be made that span much of the breadth of the book rather than being localised just to individual pieces of text. This may require saving proposed page changes in cloud storage and then, when ready, committing the page changes to the book all in one go.
  • The book doesn't make much use of images, partly because I was more interested in making sure the important ideas were in the book in at least some form, rather than providing inessential illustrations that to some extent beautify the text and improve the form of the ideas (rather than the substance). Looking at other Wikibooks books and other Wikimedia Foundation material, and also based on other thoughts about how to improve the content, I am now thinking that adding more images would be a good idea.

     MarkJFernandes (discusscontribs) 05:47, 27 November 2020 (UTC)


Wish list

  • Would be good if the turtle link on each page (other than the book-cover page), used for navigating to the navigation controls at the bottom of the page, were animated when hovering the mouse pointer over it, such that the turtle appeared to be swimming forward. It looks like this can be done by simply animating from the current Unicode string to a different Unicode string. The following animation transition might be good:
Before: =𓆉   →   After: 𓆉≅≅
--MarkJFernandes (discusscontribs) 14:24, 27 April 2020 (UTC)
  • Would like to convert all gold-coloured headings into collapsible blocks that are collapsed by default. I feel this would really make the book more usable. I can do this, but the problem is that hyperlinks that link to anchors within such collapsible blocks do not work when the blocks are collapsed (which is undesirable). It does look like JavaScript code can be used to expand such collapsible blocks before such hyperlinks are executed. However, I don't have specific guidance on how to do this for wiki pages (for conventional websites, I could probably fairly easily do this). It does seem possible to do this for wiki pages, perhaps with the use of some kind of book-wide `common.js` file; however, it will likely take a while for me to figure out how to do this. It's best to leave this for now, and have it on the book's wish list.
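A minimal sketch of such a script is below. It assumes MediaWiki's `mw-collapsed` class is what marks collapsed blocks (an assumption, not tested against Wikibooks); the idea is to un-collapse every collapsed ancestor of the element targeted by the URL fragment before jumping to it:

```javascript
// Sketch for a book-wide common.js (class names are assumptions):
// expand any collapsed block containing the anchor targeted by the
// URL fragment, then scroll to it.

// Pure helper: turn a fragment like "#Some_section" into an element id.
function fragmentToId(hash) {
  if (!hash || hash.charAt(0) !== '#') return null;
  return decodeURIComponent(hash.slice(1));
}

// Browser-only part: walk up from the target element, un-collapsing
// any collapsible ancestor, then scroll the target into view.
function expandAndScroll() {
  var id = fragmentToId(window.location.hash);
  if (!id) return;
  var target = document.getElementById(id);
  if (!target) return;
  for (var el = target; el; el = el.parentElement) {
    if (el.classList && el.classList.contains('mw-collapsed')) {
      el.classList.remove('mw-collapsed');
    }
  }
  target.scrollIntoView();
}

// Run on initial load and whenever the fragment changes.
if (typeof window !== 'undefined') {
  window.addEventListener('load', expandAndScroll);
  window.addEventListener('hashchange', expandAndScroll);
}
```

The `fragmentToId` helper is kept separate from the DOM work so the fragment parsing can be checked outside a browser.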
--MarkJFernandes (discusscontribs) 14:24, 27 April 2020 (UTC)
  • Would be good to have a hover-over Wikipedia page preview for all Wikipedia page links (as on Wikipedia). Have made a feature request regarding this here. It seems likely that this functionality is already available, but it may take a while to figure out how to use it. Best to place it on the ongoing wish list for the time being.
--MarkJFernandes (discusscontribs) 14:24, 27 April 2020 (UTC)
  • Book search currently doesn't return more than one result when more than one match occurs on a page. Such functionality is desirable for this book, where each chapter is stored on its own page. Have asked in the Technical Assistance reading room for help. Also would be good if icon and/or text for book search, matched colour scheme of book.
--MarkJFernandes (discusscontribs) 08:58, 28 April 2020 (UTC)
The Wiki transclusion functionality might be helpful in the implementation of such book search functionality.
--MarkJFernandes (discusscontribs) 08:28, 29 April 2020 (UTC)
The idea of restructuring chapter pages so that each section is on its own page, and then using transclusion to pool all chapter sections together so that a chapter can also be viewed on a single page, appears to be a good way to implement the book search functionality. With such in place, hopefully it would be possible to return several search hits for one search, even if the different hits are all for the same chapter. Another advantage of this approach is that the page categorisation can be more refined. At the moment, a page containing much content may be categorised into a category even when only one small section on the page applies to the category. With this new approach, such coarse-grained categorisation can be avoided, with the knock-on effect of improving the categorisation system of the whole book.

     MarkJFernandes (discusscontribs) 06:04, 27 November 2020 (UTC)

  • Probably would be better to use paths indicating the chapter number within the total number of chapters, so that at the top of the page this information is seen clearly in a large font size. For example, perhaps a path such as:
End-user Computer Security/Main content/Chapter_5_of_10:Some measures that are primarily physical
can be used. Probably best not to use folder names for this, as users may get the impression that more than one page is contained in each chapter. Also, on each chapter page, the heading of 'Chapter n' (where n is the chapter number) should probably be changed to 'Chapter n of 10'; a concrete example: 'Chapter 5 of 10'. This helps readers get an idea of how far through the book they are, as well as how large the book is, based on the amount of content on the page they are viewing.

--MarkJFernandes (discusscontribs) 11:28, 28 April 2020 (UTC)

Upon reflection, might be best not to change the path such that chapter-number information is included in it. The reason is that if the number of chapters changes, or the chapter ordering changes, the URLs may then have to change again, breaking any links to the old URLs (which is undesirable). Instead, see whether the pages can be customised so as to either suppress or shrink the path shown at the top of each page (for this page, the path text is currently 'User talk:MarkJFernandes/End-user Computer Security'). I think this is likely possible, seeing as the table of contents on pages can be customised.

--MarkJFernandes (discusscontribs) 11:35, 1 May 2020 (UTC)

  • Would be good to have up-vote and down-vote buttons displayed next to each section heading, or similar web 2.0 features, so that users can easily indicate whether they like, dislike, agree, or disagree with a section. Has now been implemented in "Upvote_downvote_section_links" template.
--MarkJFernandes (discusscontribs) 15:20, 28 April 2020 (UTC)

See here for wish list related to up-voting and down-voting sections.
--MarkJFernandes (discusscontribs) 14:13, 30 April 2020 (UTC)

  • In the navigation controls in the footer of each page (module), a likely improvement is to have a re-sizeable <iframe> HTML element that displays (via linking) the contents of the 'Preliminaries' page. This would be an improvement because users would then not need to navigate back to the Preliminaries page each time, when wanting to access the contents, index, or foreword.

--MarkJFernandes (discusscontribs) 10:32, 1 May 2020 (UTC)

  • Make greater use of book-specific templates, by creating them just for this book, especially for the banner and footer of each page/module (the code of which is mostly duplicated for each page).

MarkJFernandes (discusscontribs) 15:08, 1 May 2020 (UTC)

  • It is possible to use CSS files for styling; CSS is not restricted to the style attributes of the Wikitext. Therefore, it would be a very good idea to use CSS classes for the different styles used in the book: it would reduce code duplication and simply make for better code. The way such CSS files can be used is exemplified in Template:End-user_Computer_Security/Upvote_downvote_section_links. The <templatestyles> tag has a wrapper attribute that can be set so that, in the final page rendered to users, all <div> elements with their class attribute set to some value you choose have the related CSS-file styling applied to their contents. See Extension:TemplateStyles for more about this.

MarkJFernandes (discusscontribs) 09:30, 28 May 2020 (UTC)

  • Perhaps replace texts like 'section entitled' with the section symbol (§) for conciseness and perhaps better readability? If doing this, remember to add hover text so that users can hover over the symbol to get an explanation of what it means (the HTML title attribute can be used for this).

MarkJFernandes (discusscontribs) 10:13, 5 May 2020 (UTC)

  • As indicated at Wikibooks:Reading_room/Technical_Assistance#Book_search_that_lists_more_than_one_result_per_page, in order to facilitate section-sensitive search results, a chapter can be split into its constituent sections, where each section is stored on its own page, and then through transclusion the chapter can be reconstructed onto a single page. I mooted this idea about a month ago (as can be seen at the just-mentioned link) and no one has given me particular feedback on it. I'm inclined to believe that the idea would work in practice because it hasn't yet received any negative feedback. Additionally, if the chapters are split up in such ways, Wikibooks categorisation can be performed in a section-sensitive way, which seems like a really good idea. Therefore, doing such chapter conversions is on this wish list. When a section is placed on its own 'standalone' page, it should also contain a link that takes the user to the section as transcluded on its chapter page; this way, when users traverse links taking them to such 'standalone' pages, they can easily get to how the section should properly be read on its chapter page. In fact, automatic redirects to the parent chapter page may be even better than such links.

MarkJFernandes (discusscontribs) 15:26, 26 May 2020 (UTC)

Userboxes on main page?


Hello!

I just noticed that userboxes on the main page of this Wikibook have caused it to be included in at least one userbox category. Is there a way to make it so the userboxes appear, but are not added to the category?

Thanks! --Mbrickn (discusscontribs) 14:34, 23 June 2021 (UTC)

Preliminaries section

Free software & other free computer resources


Since this book emphasises inexpensive security, should there be a section, or other information, dedicated to free software and other free or low-cost computing resources?

--MarkJFernandes (discusscontribs) 11:52, 16 April 2020 (UTC)


Add a section called "Computer-driven event logging"?


Logs can be used to uncover hacking attempts whether they were successful or not, and can point a user in the right direction for where extra security may be needed in their systems.

Trammell Hudson briefly deals with whether computers should be completely shut down, or suspended, in relation to security (see https://trmm.net/Heads_FAQ#suspend_vs_shutdown.3F). This comparison can be extended to whether computers should be powered on, or powered off, in relation to security. It may well be better for a computer to be powered on, as in such a state it can be more difficult to carry out certain classes of attack. In conjunction with a computer being powered on, computer-driven event logging can be activated, to provide even more security.

--MarkJFernandes (discusscontribs) 13:54, 16 April 2020 (UTC)


Change section title "Psychic spying of password" → "Spying by mind-reading"?


Perhaps it would be a good idea to broaden the scope of this section by renaming it so that it is for the broader topic of mind-reading.

MarkJFernandes (discusscontribs) 17:31, 8 May 2020 (UTC)


Add section entitled "File-based security" somewhere?


Such a section would deal with the issues of malware in files, digital signing of files, secure communication of files, the backup of files, and probably a few other "file-based security" related issues. Section could be placed in the "Chapter 10: Miscellaneous notes" chapter, but then again, it might be better to turn this section into a chapter in its own right. Doesn't seem appropriate to place the section in any of the other chapters.

MarkJFernandes (discusscontribs) 09:13, 16 May 2020 (UTC)


Index not complete as of 3rd June 2020


Realised that the index would take quite a long time to complete, so decided to leave it in an unfinished state on the book page corresponding to this talk page (the Preliminaries page). Might ask for crowd-funding to fund its completion.

     MarkJFernandes (discusscontribs) 08:57, 3 June 2020 (UTC)


PS/2 keyboards are more secure than USB keyboards


According to Micah Lee in the Qubes OS video hosted here (go to 29m:47s), using USB instead of PS/2 for plugging in your keyboard is something of a security risk. This has already been mentioned in passing in the §"Pros vs Cons", under the section dealing with whether Raspberry Pi Zero devices should be used as secure downloaders. But it should probably also be documented as standalone info in the book, perhaps in the "Miscellaneous notes" chapter.

     MarkJFernandes (discusscontribs) 10:50, 6 June 2020 (UTC)


Sources of computer-security information that perhaps provide further content for book, whether through direct inclusion or by indirect hyperlinking


The following sources were suggested by the Qubes-user "Catacombs":

     MarkJFernandes (discusscontribs) 10:41, 10 June 2020 (UTC)


Dealing with the situation where you want to work with potentially security-compromised equipment/software


Dealing with such situations is related to the "What to do when you discover your computer has been hacked" chapter.

Dealing with such situations is related to the note linked-to here entitled 'additions for "Sandboxing and cloud computing" section'. With that note in mind, old second-hand potentially-compromised smartphones and cameras might be usable simply for capturing photos and sending them to an on-site local printer for printing. Once printing completes, visual inspection of the print-outs (comparing them, using your own vision, to what was photographed) might adequately establish whether the print-outs were sufficiently correct, thus perhaps overcoming the potential security weaknesses of such potentially compromised technology.

Sand-boxing and cloud computing also work to contain the ill-effects of malware (possibly hidden within software), so that your other system components don't get damaged. This is touched upon in the §⟪Sandboxing and cloud computing⟫.

Personally, I am contemplating whether it might be possible to use the potentially compromised BIOS/UEFI firmware of my laptop. The COVID-19 situation doesn't help such potentially compromised circumstances.

I have wondered whether my laptop, without Wi-Fi and Bluetooth capabilities as a consequence of removing the Wi-Fi+Bluetooth card, is perhaps fairly secure so long as it is not networked, is not connected to other computers/devices, and doesn't have malware on any of its disks or other connected media, even if there should be malware and/or backdoors in the firmware (including the BIOS/UEFI firmware) and/or the hardware. Such malware and/or backdoors are perhaps not capable of facilitating the deceptive altering of natural-language texts without the aid of outside interventions (such as those through wireless communication), because of the byte-size limitations of such security compromises. The amount of code you can fit in the BIOS/UEFI firmware is limited; such limitation seems to be a security principle for reducing malware potential (it might be worth documenting this principle elsewhere in the book, perhaps in the Digital Storage chapter). The limitation is exploited in the security invention mentioned in the "Design feature for enabling the detection of malware in BIOS firmware" section on the talk page of the "New security inventions requiring a non-trivial investment in new technology" chapter.

However, it may still be possible for such malware in the firmware to insert random snippets of back-door code into executable files. In light of this, if transferring executable files off the computer (perhaps as the result of a software-development project), it might be a good idea to run antivirus scans on those files. I suppose it could even be possible for such malware to insert random snippets of back-door code into source code. In that regard, it may be a good idea to run some checks on the code after it is copied to a safe and trusted computer. If such code then needs to be compiled, the user wouldn't compile it on the main computer; instead, the copied-over code that had been checked (not the code on the main computer, which could still be compromised due to malware) would be compiled on a safe system; cloud computing could perhaps be used.

In the situation where your main computing device may be compromised to some extent, getting your internet connection through an intermediate device acting as a kind of security buffer (like a canal lock or decompression chamber) might be a good idea. Such is dealt with, at least in part, in the note linked-to here entitled 'Having intermediate device for internet connection might be more secure?'.

Potentially-compromised laptop, hard disks, and memory sticks, with labels instructing as to how to use them safely

Cryptography can be leveraged to enable the safe use of potentially unsafe devices. Essentially, what happens is that the unsafe device only ever works on encrypted data, and is unable to decrypt that data. Such leveraging is perhaps used in the OS-level encryption of external SD cards in smartphones. SD cards in general are particularly worrying from a security perspective, for a variety of reasons. However, if malware and/or malevolent hardware is contained on a compromised SD card, it can likely still be used safely if: i) it is only used for storing encrypted data; ii) it is impossible for any "maltech" on it to get the decryption keys or the decrypted data; and iii) the keys aren't otherwise compromised. This can perhaps be partly implemented by only decrypting data on the SD card in the following way: 1) use a Nitrokey product for cryptographic services; 2) copy encrypted data off the SD card to RAM; 3) physically disconnect the SD card before finally decrypting the copy of the encrypted data stored in RAM. When reconnecting the SD card, there shouldn't be any decrypted data accessible by any "maltech" in the SD card. It should be noted that data encrypted in the past can remain hidden on SD cards, so if old cryptography keys become compromised, this could pose a risk to flash memory dating from when the old keys were in use, even if you elected to deep low-level format such media. Please consult the "Digital Storage" chapter for more about the risks of using SD cards.

     MarkJFernandes (discusscontribs) 19:28, 15 December 2020 (UTC)

«Software based» chapter   (chapter 1)

Improvements/additions for "Sandboxing and cloud computing" section


Perhaps mention that sand-boxing might work well for you if the following condition is met:

  1. any malicious modification of the user files you use in such computing is automatically tamper-evident.

This might be the case when doing certain graphics work. Perhaps examining the produced graphics files is enough of a quality-control mechanism, such that we don't need to worry about malware and the like so long as the produced files look okay?
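As a toy illustration of condition 1, a hash manifest taken before a sandboxed session makes any later modification of the user files tamper-evident (the file names and contents here are made up; Python, standard library only):

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Record digests of the user files before handing them to the sandbox.
files = {"scene.png": b"...image bytes...", "notes.txt": b"draft"}
manifest = {name: digest(data) for name, data in files.items()}

# After the sandboxed session, any modification shows up on comparison.
files["notes.txt"] = b"draft (maliciously altered)"
tampered = [n for n, d in files.items() if digest(d) != manifest[n]]
assert tampered == ["notes.txt"]
```

Note this only flags *that* a change occurred; the quality-control inspection suggested above (e.g. eyeballing produced graphics) is still needed for files that are expected to change.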

Can additionally mention that cloud computing might be good for you if in addition to the just-mentioned sand-boxing condition applying (cloud computing in some sense is also a kind of sand-boxing), the following condition is also met:

  1. whether or not the files are stolen is of no concern to you.

In some cases of cloud computing, you may have faith that the software functions as advertised, but be unsure as to whether your user files may be stolen. In such cases, the first sand-boxing condition above, perhaps can be ignored.

More broadly, safe and unsafe systems can be used together, with the safe system verifying the output/work of the unsafe system. Such a set-up is only advantageous if the combined cost of the unsafe-system work and the safe-system verification is less than the cost of simply doing the work on the safe system. This might be the case where a user has access to extremely powerful computing resources that are considered unsafe, and also has a safe, but not powerful, system that can be used for verification. The type of work is relevant. For certain things, like perhaps 3D-rendering a scene, verification may have to take the form of simply performing the work a second time on the safe system and then comparing for correctness. Writing articles might also belong to this class of activity. Bitcoin mining, on the other hand, is probably computationally expensive to do but cheap to verify, so perhaps such a 'safe-unsafe systems' set-up would work for such mining.
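The 'expensive to compute, cheap to verify' asymmetry mentioned for mining can be illustrated with a toy hash-based proof-of-work (a sketch, not real Bitcoin mining; the difficulty prefix is arbitrary):

```python
import hashlib

def pow_hash(data: bytes, nonce: int) -> str:
    return hashlib.sha256(data + str(nonce).encode()).hexdigest()

def mine(data: bytes, prefix: str = "000") -> int:
    # Expensive search: this is what the powerful-but-unsafe system does.
    nonce = 0
    while not pow_hash(data, nonce).startswith(prefix):
        nonce += 1
    return nonce

def verify(data: bytes, nonce: int, prefix: str = "000") -> bool:
    # Cheap check: a single hash on the safe system.
    return pow_hash(data, nonce).startswith(prefix)

nonce = mine(b"block header")   # thousands of hashes on average
assert verify(b"block header", nonce)
```

Rendering, by contrast, has no such shortcut: `verify` would itself have to re-render, which is why the cost comparison in the paragraph above matters.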

     MarkJFernandes (discusscontribs) 10:42, 7 October 2020 (UTC)



Part of the security risk in preinstalled software is....

[edit source]

Part of the security risk in preinstalled software is that it isn't shrink-wrapped and has no holographic security seal? Should such thoughts be incorporated into the text?

MarkJFernandes (discusscontribs) 08:06, 13 May 2020 (UTC)

Origin of the idea behind the "Malicious sneaky replacement of FDE system with historic clone of system ... " attack

[edit source]

I initially thought that this attack was described by Trammell Hudson in his 2016 33c3 talk, hosted at https://media.ccc.de/v/33c3-8314-bootstraping_a_slightly_more_secure_laptop . But when later trying to find the relevant part of the talk, I couldn't find it. This class of attack probably has a name specially designated to it by the security community.

MarkJFernandes (discusscontribs) 13:59, 21 May 2020 (UTC)


Mention 'www.offidocs.com" & "www.onworks.net" in "Sandboxing and cloud computing" section?

[edit source]

Can make specific mention of https://www.offidocs.com and https://www.onworks.net, which provide a lot of very powerful and useful cloud-based software (under 'easy' software licences) free of charge (including Linux installations).

     MarkJFernandes (discusscontribs) 08:43, 7 July 2020 (UTC)



Add info about ReactOS, to §"Which OS?"❓

[edit source]

The ReactOS operating system is an alternative way to run Windows programs (compared with WINE over Linux) that is either more secure than Windows, or constitutes a path to more security when compared with Windows. See https://reactos.org/wiki/ReactOS#Secure and https://reactos.org/forum/viewtopic.php?t=17226 for more info. Note that Windows 10 is in the Windows NT family, the family with which ReactOS aims to be compatible.

     MarkJFernandes (discusscontribs) 11:07, 2 June 2020 (UTC)



Catacomb's note about Tails Linux working on reproducible builds

[edit source]

Qubes-user "Catacombs"'s note (paraphrased by MarkJFernandes):

"Tails Linux is working on reproducible builds, but it isn't yet implemented. Instead, Tails Linux's current verification scheme is by a Firefox add-on extension. It works by verifying that the file I downloaded of the Tails Linux OS is the one that matches the image signatures provided by the extension. This puts trust in the Firefox system, and in a connected way in the HTTPS system (to the extent of deeming the HTTPS system as being infallible). My thoughts are that we could generate an additional encryption layer on top of the HTTPS system, for items requiring greater security than simply HTTPS alone. The added layer would have more sophisticated encryption than the HTTPS system, and would use another set of security cryptographic certificates (other than the TLS certificates that HTTPS uses). Using some kind of encrypted token might be an idea, where only those users possessing the token are able to pass through the security."

MarkJFernandes's current response in respect of having an additional encryption layer:

"Hmmm. I think that usernames and passwords already add the second-level of security you're outlining (unless I'm misunderstanding you). As for encrypted token, two-factor authentication and two-step authentication probably effectively facilitate such second factors. Such authentication is dealt with in the book here."
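For reference, the core of the verification scheme under discussion (whatever channel delivers the trusted digest, be it the Firefox extension or some additional layer) reduces to a digest comparison, sketched here in Python with a made-up image:

```python
import hashlib

def verify_download(image: bytes, expected_sha256: str) -> bool:
    # The expected digest must arrive over a channel you trust --
    # that trusted channel is exactly what the note above debates.
    return hashlib.sha256(image).hexdigest() == expected_sha256

image = b"...Tails ISO bytes..."                  # stand-in download
good = hashlib.sha256(image).hexdigest()          # trusted reference value
assert verify_download(image, good)
assert not verify_download(image + b"tampered", good)
```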

     MarkJFernandes (discusscontribs) 16:12, 9 June 2020 (UTC)


Catacombs's note about what is perhaps Catacombs's most secure laptop/tablet/smartphone

[edit source]

Qubes-user "Catacombs"'s note (paraphrased by MarkJFernandes):

"Curiously, I bought an old Android device, and then used the MrChromebox.tech script to put coreboot/SeaBIOS on it to enable me to boot Linux on the device. Now if I boot Tails Linux on the device, rebooting each time I have a different computing purpose in mind, it is perhaps the most secure computing device I have, although I do worry about having to trust Google won't find a way to feed all of my internet typing actions back to their servers."

MarkJFernandes's current response to note:

"This note can perhaps be integrated into the main content, but it might be best to build up some more information on these issues before doing any such integration. It's important to add insights to the book that are gathered from the practice of security concepts."

     MarkJFernandes (discusscontribs) 16:36, 9 June 2020 (UTC)


Add mention of Puppy Linux to "Which OS?" section?

[edit source]

The Qubes-user "Catacombs" has highlighted Puppy Linux, amongst just a few operating systems, as providing certain security features that "Catacombs" appears to imply are distinct to a certain extent, and that are not present in Qubes.

"Catacombs"'s thoughts on Puppy Linux (paraphrased and with some elaboration, by MarkJFernandes):

"Puppy Linux users seem to think that what they call a multi-save optical disc is a highly secure way to work. What they do is re-install the Puppy Linux operating system for each user session (even if that means re-installing an old version of the Puppy Linux OS absent the latest Puppy Linux updates). In some ways, this is similar to Qubes OS, in which a temporary VM is destroyed after completion of the specific use case for which the VM was created (and always by the time of reboot, as well as by the time of shutdown, of the computer). With Puppy Linux, the user can choose not to save any information after user sessions, which means that session-to-session use can be completely non-preserving of state. Since Puppy Linux is completely loaded into RAM for each session (without being installed to any of the local drives), it is slow to boot, but it does run fast. The saves on the optical disc (CD or DVD) can contain additional programs, program upgrades, and the user's personal files. During a user session, a user can opt not to save to their multi-save optical disc; they might choose this if they suspect the session may have been compromised in some way.

Puppy Linux works without setting apart a distinct root user for system actions, operations, and procedures normally segregated due to their increased risks to OS integrity. Users feel this is just fine, as one gets a new copy of the OS with every boot.

It used to be that all of Puppy Linux could be started with a video-display option where the work of the display driver would be carried out by the main processor (rather than by video chip[s] and graphics card[s]). Yes, it's true that most display drivers are available for the various video chips and graphics cards around, but such driver bypassing prevents drivers from doing things considered to be anti-secure and anti-private: it makes the system more secure. The same measure could be implemented in Qubes, but then who wants a slower Qubes?"


MarkJFernandes's thoughts on this:

"    Related to:
‣ optical-disc info in "Conventional laptops" subsection of the "Factory resets" section.
‣ security advantages outlined in the "Rewritable media vs optical ROM discs" section.
‣ following excerpt from §"Regarding operating system":
Some general security advice in relation to using an operating system, is for users to have an administrator account that is different to the standard account users use for everyday computing. The standard account has fewer privileges. The more powerful administrator account, that also has higher associated security risks, should also have a “minimised” exposure to risk due to it being used only when needed—when not needed, the less risky standard account is instead used.
"


     MarkJFernandes (discusscontribs) 08:30, 10 June 2020 (UTC)


Probably a section on internet-security software and anti-malware software should be added as a gold-coloured-heading section to this chapter...

[edit source]

Examples of such software: Little Snitch; WireShark; Norton Internet Security; McAfee anti-virus software.

     MarkJFernandes (discusscontribs) 09:34, 10 June 2020 (UTC)


Add section called "Communication software" to this chapter?

[edit source]

Email is well known not to be a secure method of communication. This can be documented in a new section added to this chapter, called "Communication software". The section can mention how email can be made more secure with PGP encryption and signing. It can then go on to mention the different software available that offers end-to-end encryption of communications (such as Skype). Mention can also be made of how insecure mobile and telephone networks appear to be (because intermediate call centres apparently can listen in on such communications).

     MarkJFernandes (discusscontribs) 10:00, 10 June 2020 (UTC)


Is there a security principle of "software-less hardware", and if so, should it be added...?

[edit source]

I'm currently developing my ideas on the concept of a software-less computer system that you purchase or establish. The system can come with software already loaded, but it ought then to be made software-less by wiping it clean of software. This includes not having software in the firmware, especially the BIOS/UEFI firmware. Once such a system is established, the user then downloads all the software they require (using their secure communications device, as described in §⟪Regarding how to obtain software⟫), and then proceeds to install the software for the system. The user can later wipe the system to a clean state again, and reinstall afresh for security reasons. The system doesn't necessarily need to be placed in a "blank" state, but any software on it must be wiped off in the process of reinstalling the software for the system.

The reason I'm thinking there is a security principle in this, is that it splits the issue of establishing a secure system into two distinct parts that appear able to be dealt with individually in effective ways for the purposes of establishing security. The hardware can be verified using a variety of verification methods, many of which are documented in the ⟪Broad Security Principles⟫ chapter under §⟪Measuring physical properties for authentication⟫ (including simple visual inspection). Hardware tampering is likely much rarer, simply because of the nature of hardware when contrasted with software, and it is likely easier to detect than software tampering. Because software tampering may be hard to detect, and easy for adversaries to do[1], it is probably a good idea simply to download all the software using a secure communications device. §⟪Regarding how to obtain software⟫ provides general information on how to obtain software securely. Splitting the task into these two distinct activities seems to constitute a security principle for the establishment of secure systems.

If such a security principle does indeed exist, then it may be worthwhile adding information about it to this book, perhaps to this chapter.

I considered whether BIOS firmware (and also other firmware) was perhaps mostly protected both by not allowing re-flashing, and by insisting in the update process that updates be cryptographically signed with a private key known only to the vendors of the firmware software. Briefly researching this, it does appear that such protection is officially advised, in the form of NIST guidelines (see https://cts-labs.com/secure-firmware-update). However, because the `flashrom` software appears to be very widely supported by the different motherboards available, and because of the information here, it does appear that BIOS/UEFI vendors mostly don't implement such a protocol (which is perhaps quite worrying).

The concept of "software-less hardware" is related to Joanna Rutkowska's paper "State considered harmful" (subtitle "A proposal for a stateless laptop") dated December 2015.

     MarkJFernandes (discusscontribs) 17:50, 3 November 2020 (UTC)


There are kinds of bootloaders other than BIOSes and UEFIs, as well as similar security threats based in other kinds of firmware (such as in the firmware chips of graphics cards), so perhaps material should be extended and generalised to cover....?

[edit source]

There are kinds of bootloaders other than BIOSes and UEFIs, so perhaps material in this chapter should be generalised to also cover the other kinds of bootloaders. The Raspberry Pi is an example of a computing device that uses a bootloader that is neither a BIOS nor a UEFI.

Similarly, malware in the BIOS/UEFI firmware isn't the only firmware point of weakness in computer systems. You get firmware for all kinds of things: disk drives, network cards, graphics cards, memory sticks, and so on. Malware can reside in all this other firmware, and may use different microchips to the BIOS/UEFI firmware microchips. Probably the "Security of BIOS/UEFI firmware" section in this chapter should be extended and generalised to cover these other threats.

     MarkJFernandes (discusscontribs) 14:34, 19 October 2020 (UTC)


Raspberry Pi device can be used to flash the ROM chips on other devices (such as a laptop)

[edit source]

This is another advantage of the Raspberry Pi device in relation to using it as a device for secure downloading. The only additional things needed appear to be wires and a SOIC-8 Pomona clip; these things appear to be mostly safe to use, in the sense that malicious hardware mostly cannot be hidden in them (not the case with microchips, for example). See here for info on how a Raspberry Pi can be used in this way. It would seem that this method effectively turns the Pi device into a USB (flash) programmer, but perhaps unlike USB programmers, you can purchase the equipment securely, i.e. you can thwart MITM attacks by picking a random unit from a shelf in a physical store—not so sure you can buy USB programmers in this way. Probably this should be added as one of the advantages in the ⟪Pros vs Cons⟫ section.

     MarkJFernandes (discusscontribs) 16:35, 15 October 2020 (UTC)


Wherever the security advantage of the principle outlined in §⟪User randomly selecting unit from off physical shelves⟫ is mentioned...

[edit source]

Wherever the security advantage of the principle outlined in §⟪User randomly selecting unit from off physical shelves⟫ is mentioned, such as in this chapter in respect of smartphones and then again in respect of the Raspberry Pi device, as well as in other places in the book, mention should probably also be made of the principle outlined in §⟪Ordering many units of same product⟫, especially when the item to be purchased is cheap. For example, ten Pi devices can be bought from the same store, and then nine random units returned, to better ensure that the one you keep hasn't undergone any tampering. The security advantage derived from this second principle seems to be significant.

     MarkJFernandes (discusscontribs) 16:05, 29 October 2020 (UTC)


Include implementing extra sandboxing for closed-source blobs, under the §⟪Sandboxing and cloud computing⟫?

[edit source]

Closed-source blobs, as pondered in the discussion under the Raspberry Pi forum topic "Secure computing using Raspberry Pi for business purposes", can be perceived as particular security concerns of a computer system. One potentially novel approach to dealing with them, is to reverse engineer them, and then implement extra sandboxing in-code on the extracted source code, to limit their potential harm. It seems that it is probably legal to do this under UK law, so long as it is done privately and the user isn't under a contract preventing him from doing so—see section 50C of the Copyright, Designs, and Patents Act 1988. Additionally, it might also be legal for such users to release the source code modifications (not the modified source code) in the form of a patch so long as the patch doesn't constitute an infringing "copy" of part or all of the closed-source blob, for others also to be able to patch their closed-source blobs in the same way (thus saving on the work done for implementing such sandboxing across the entire user-base). The sandboxing doesn't necessarily only need to take the form of code additions and code rewriting; it can also take the form of simply deleting portions of the closed-source source code, portions deemed unnecessary for some users, where leaving them in would only increase the attack surface and/or potential vulnerabilities in the closed-source blobs (see ⟪Do avoiding "bells and whistles", trying to be "barebones", and reducing power & capability, constitute a broad security principle?⟫ note for more about this).

The open-source software me_cleaner appears to implement this principle, by modifying the Intel ME closed-source firmware blob (a closed-source firmware that is controversial due to perceptions of it being a potential security vulnerability) to reduce its scope for inflicting or enabling damage to a user's computing activities.

     MarkJFernandes (discusscontribs) 17:27, 15 December 2020 (UTC)


JTAG interfaces (perhaps through a JTAG port) can possibly be leveraged to flash more easily firmware into ROM chips on systems that support JTAG

[edit source]

See https://en.wikipedia.org/wiki/JTAG#Storing_firmware. JTAG has been identified as a security risk because of such ability, but actually, it could in fact be an advantage. Being able to reinstall firmware appears to be a very good security precaution, and without JTAG this is perhaps more difficult, especially when malware is already in the target firmware chips. The standard firmware-upgrade utilities may not be capable of removing malware when it is already in the pre-existing firmware. In such cases, it is prudent to wipe the pre-existing firmware clean, to get rid of any pre-existing malware. The JTAG interface might more easily facilitate such wiping, as well as the re-installation afterwards of genuine firmware code. Having unpluggable BIOS-firmware ROM chips may not help: if you swap out the existing chips for blank chips, a system without JTAG or a USB programmer may be incapable of facilitating the re-installation of the firmware code—without a BIOS, due to having only blank BIOS-firmware ROM chip(s) after swapping out the chip(s), your computer system perhaps won't start, nor get to the point where new firmware can be installed. An alternative to JTAG is to use a USB programmer where you "manually" wire up the programmer to the pins of the ROM flash chips. However, such an alternative may not be as easy as using any pre-existing JTAG port, in consideration of the "manual" wiring that seems to be required when using USB programmers.

In light of these thoughts, it may be a good idea to use hardware that has a JTAG port.

     MarkJFernandes (discusscontribs) 11:33, 9 November 2020 (UTC)


Using firmware-chip sockets may be a good idea, for security reasons; mainboards with built-in mechanisms for 'properly' wiping pre-existing firmware stored on chips, may also be good

[edit source]

The Coreboot documentation indicates that if desoldering flash firmware chips for the purpose of installing Coreboot, it is recommended that the soldered-on chips be replaced with a flash socket that instead takes pluggable-and-unpluggable flash chips. Particular mainboards that have such sockets and socketable chips "off-the-shelf" can be used to save on the work involved in doing such replacement yourself (the ASRock H81M-HDS, ASUS F2A85-M, and Foxconn D41S mainboards all use socketable flash). From a security perspective, such socketable flash may be a good idea, in terms of being better able to ensure the integrity of the firmware. For example: you can create several back-up firmware chips that you securely store in different remote locations; if intrusion is ever detected, you can then simply replace your socketed firmware chip with one of your trusted backup chips. Without a socket, the alternative process may involve the labour of desoldering the present chips, and/or fiddling with a USB programmer, at the "point in time" when intrusion is detected; with a socket, you can potentially do the work beforehand, and save on labour at the "point in time" when intrusion is detected.

Some mainboards have built-in mechanisms for 'properly' wiping pre-existing firmware stored on chips. This again may be good for security, and it doesn't seem that all systems have this facility. Some systems, appear only to have mechanisms in place for updating and upgrading pre-existing firmware code (rather than a proper wiping); unfortunately, if malware is already in the code needing updating, such mechanisms may not remove such pre-existing malware. Apparently, most Linaro boards have such mechanisms for properly wiping pre-existing firmware stored on chips—see here.

Add info to §⟪Security of BIOS/UEFI firmware⟫ about write-protect physical switches potentially being useful for protecting firmware....?

[edit source]

Because firmware may be able to be altered during normal OS operation, or during boot time, by other software, it could be a good idea to employ write-protect physical switches to prevent such from happening. In my Chromebook C720, there is, for example, a write-protect screw ostensibly for making the firmware, or portions of it, read-only.

     MarkJFernandes (discusscontribs) 17:12, 15 December 2020 (UTC)


Info for §⟪Security of BIOS/UEFI firmware⟫; might be easier to secure firmware on mobile devices, when compared with securing firmware on larger, more conventional kinds of computers

[edit source]

It might be easier to reinstall, in a proper way, the firmware of mobile devices (such as smartphones and tablets) than that of other computing devices, because on mobile devices all the firmware is perhaps usually located within a single ROM chip. With other computers (including laptops), there may be several different chips, each containing its own separate firmware where malware may be present. I have personally found this to be the case with my Chromebook and my laptop: there's the network-card firmware to consider, the firmware of the SSD or HDD, the BIOS firmware, the graphics-card firmware, etc. Whilst such easy reinstallation is desirable, it perhaps means that the single firmware chip is also a more potent point of attack for adversaries (due to the highly integrated nature of the computer system that constitutes the mobile device). Have asked a question concerning this paragraph @ https://security.stackexchange.com/q/244266/247109

It appears that there is a convenient mechanism for faithfully reinstalling the firmware of certain Lenovo tablets, such that pre-existing malware gets wiped, simply by installing via the tablet's USB socket with connection to another uncompromised computer. See https://androidmtk.com/download-lenovo-stock-rom-models.

     MarkJFernandes (discusscontribs) 17:41, 15 December 2020 (UTC)


Raspberry Pi used as a secure downloader perhaps doesn't have much of a disadvantage based in needing to acquire a secure VDU...

[edit source]

The con listed in the Wikibooks book regarding whether a Raspberry Pi can be used as a secure downloader, in respect of needing to acquire a secure VDU, perhaps isn't so much of a con. If you are just running the Raspberry Pi OS for downloading files over public HTTPS URLs to an SD card, there is probably little to worry about if someone is fiddling with your VDU images, so long as you enter no confidential information. The same may also be true if, after downloading such files, you are writing data images to removable media. The OS run on the Rasp Pi device can be hardened so that it is more resistant to attacks focused on meddling with VDU images.

     MarkJFernandes (discusscontribs) 17:56, 15 December 2020 (UTC)


It might be worthwhile being explicit that the risk of the hardware being or becoming compromised also includes the risk of firmware modification. The distinction between software and hardware is somewhat blurry. So it might be worthwhile changing "the risk of the hardware being or becoming compromised is very low" to "the risk of the hardware and firmware being or becoming compromised is very low". Protecting the BIOS firmware might entail simple measures such as the use of a BIOS password. However, removing the CMOS battery would probably wipe the password (I would have thought). Using the Heads BIOS/UEFI boot-firmware system, such that the TPM is used to secure the firmware code, would probably be better for securing the BIOS/UEFI firmware.

Probably the often-touted way of securing the installed operating system is to use the Secure Boot protocol (where the system disk is locked to the particular BIOS/UEFI firmware by means of cryptographic signing). Documenting this as another method might be worthwhile. However, Secure Boot may not be all that secure. According to the document at http://www.c7zero.info/stuff/DEFCON22-BIOSAttacks.pdf , attacks do exist against 'Secure Boot'-enabled UEFI set-ups. In fact, it might be that Secure Boot is quite weak in some regards. In this respect, the advice of this Wikibooks book that the bootloader be physically secured separate from the computer system seems likely still to hold true, in spite of any perceived security benefits from 'Secure Boot'. More generally, basic security principles seem likely to provide much better security than certain 'technical computer wizardry'. A 'back to basics' security approach is perhaps needed, and is perhaps "missing from the vocabulary" of people coming from a background strongly based in the technical side of computer things.

     MarkJFernandes (discusscontribs) 18:56, 15 December 2020 (UTC)

«Passwords and digital keys» chapter   (chapter 2)

[edit | edit source]


Add section under "Digital cryptography: security certificates, keys & tokens" entitled something like "PGP signing and encryption for secure communications"?

[edit source]

There isn't any information on how to use PGP signing and encryption for secure communications (such as secure email communication, and secure forum posts). Probably such information should be included somewhere. The aspect of 'security by time passed' in the repeated communication of public keys is covered somewhat in "Broad security principles" (chapter 8)-§"Time based"-§"Based on time passed"-§"Example 2"; however, that is only in passing, and not a full exposition on how to use PGP signing and encryption for secure communications.

MarkJFernandes (discusscontribs) 08:24, 7 May 2020 (UTC)


Add information under "Password security" about how security is affected by the 'keep me logged-in' option (available when logging into online accounts)?

[edit source]

Choosing 'keep me logged-in' can perhaps help to reduce the attack window (and so also the attack surface) associated with the interception of passwords entered during log-in. However, if you accidentally leave your computer unlocked, then it can also leave some of your online accounts exposed that wouldn't be if you had not invoked this option on those accounts.

MarkJFernandes (discusscontribs) 08:45, 7 May 2020 (UTC)


Add non-incidental content on crypto-shredding and the Heads “Destroy key when attacked” functionality?

[edit source]

The Heads “Destroy key when attacked” functionality is probably something usefully documented in the "Passwords and digital keys based / Chapter 2" section because it is a good way to secure digital keys. If documenting under the section, having a link to the broader “Destroy key when attacked” security principle (in the Broad Security principles chapter) would probably be a good idea.

Likewise, the practice of crypto-shredding for the purposes of increasing security probably ought also to be documented in the "Passwords and digital keys based / Chapter 2" section. For example, a user may encrypt some important asset, crypto-shred the encryption keys, and then only be able to recover the data by relying on their key backups. By doing this, they may have effectively increased the security protecting the asset considerably (at the price of it being more difficult [but not impossible] to recover the asset). Such backup methods might perhaps be good for protecting Bitcoin wealth, where it doesn't matter if it takes a while to recover the wealth.
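A minimal sketch of crypto-shredding under the assumptions above (Python; an XOR one-time pad stands in for a real cipher, and a variable stands in for the offline key backup):

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # One-time-pad XOR as a stand-in for a real cipher.
    return bytes(a ^ b for a, b in zip(data, key))

asset = b"wallet seed phrase"
key = secrets.token_bytes(len(asset))
ciphertext = xor_bytes(asset, key)

offline_backup = bytes(key)   # e.g. a paper or safe-deposit copy of the key
key = None                    # crypto-shred: destroy the working key

# The asset is now recoverable only via the key backup.
assert xor_bytes(ciphertext, offline_backup) == asset
```

The security gain comes from the working key no longer existing on the live system; an attacker must now compromise the (presumably harder-to-reach) backup.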

MarkJFernandes (discusscontribs) 16:50, 15 May 2020 (UTC)


The distinction of 'psychic' within the broader topic of 'thought reading and control' is perhaps pointless?

[edit source]

See discussion page for the "Mind-reading attacks" chapter.

     MarkJFernandes (discusscontribs) 08:30, 2 June 2020 (UTC)


"Psychic spying of password" section is missing mention of protection using MFA

[edit source]

Such is missing most likely because it was first thought that it was significantly covered under password encryption. However, that is not the case. Probably a new sub-section should be created to document such protection. After such documentation, probably the "This principle is somewhat related to the later “Protection using password encryption” subsection." footnote, as well as probably the "Also, the use of security tokens (such as USB security tokens[1]) as well as the use of password encryption, can overcome such psychic attacks." sentence, should be accordingly updated and improved. Also, such documentation should link to the "Multi-step authentication and multi-factor authentication" subsection.

     MarkJFernandes (discusscontribs) 08:46, 2 June 2020 (UTC)


If fake keys are added to paper-based keyboard scrambler, use of such fake keys can obfuscate the password capture that mind-reading spies might try?

[edit source]

Essentially, the paper keyboard scrambler is extended to include keys outside the keyboard area. When the user presses such keys, nothing is input into the keyboard. The user can then potentially use such fake key presses, during the typing of their password, to obfuscate the password they think about in their mind.

If the password length is 10, the alphabet size is 26, and the number of fake key presses is 5, then perhaps the password is obfuscated by up to the following factor when the attacker knows the password length (the factor being the number of candidate sub-sequences the attacker will need to try using a brute-force method, since the 5 fake presses can be interleaved anywhere among the 15 observed presses):

    C(15, 10) = 15! / (10! × 5!) = 3003
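A minimal sketch of computing this candidate count, assuming the fake presses may be interleaved anywhere among the real ones:

```python
from math import comb

def candidate_count(password_len: int, fake_presses: int) -> int:
    """Number of ordered sub-sequences an attacker must try when they
    know the password length but not which key presses were fake."""
    total_presses = password_len + fake_presses
    return comb(total_presses, password_len)

print(candidate_count(10, 5))  # 3003 candidates among the 15 observed presses
```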

Perhaps add these ideas to the "Protection using password encryption > Without technology" subsection?


     MarkJFernandes (discusscontribs) 10:28, 2 June 2020 (UTC)


ADD use of deep-fake-resistant media for communicating public keys → §"Non-compromised communication of public keys"❓

[edit source]

Deep-fake-resistant videos (and other media) can perhaps be used to communicate public keys, such that it is difficult to fake such media.

Ideas for deep-fake-resistant technology:

  • Use 3D videos that are perhaps harder to fake.
  • Use camera angle that rotates round.
  • Use famous head of company to communicate key (face that everyone knows).
  • Shoot video in famous location, such as in the centre of New York.
  • Use high quality resolution and audio. The higher the quality, perhaps the easier it is to detect whether it has been faked or not.
  • Use holograms in video, that are difficult to fake.

     MarkJFernandes (discusscontribs) 11:47, 2 June 2020 (UTC)


Further ideas regarding password security

[edit source]

I'm moving closer to the overhaul of my security credential system. I have been using the paper-based keyboard scrambler for a few months now, creating a new scrambler/cipher roughly once a week, or with a longer lapse between ciphers (such as when other activities have taken higher priority). It appears to be effective as well as generally good to use—it seems that sometimes low-tech solutions are the best. I am thinking of using the Google password manager/vault (that comes as standard with Google accounts) to store all of my passwords. I would then also use it to create random strong passwords for all my accounts. Moving the browser window such that suggested strong passwords don't appear on the VDU seems to be good and to work well. I might make a video about it because it seems to be a really good idea: you overcome people spying over your shoulder, people spying with hidden cameras, and also spying via interception of VDU electronic signals.

With such a planned set-up, there would mostly only then be one password to remember: the master password for the Google password manager/vault. Some more thoughts about keyed-in passwords:

  • When using a paper-based scrambler/cipher, it doesn't seem so necessary to add "salt" to the memorable words or phrases making up what the user consciously selects when keying in the password; more generally, what the user remembers as the password before scrambling doesn't need to be difficult to crack. The scrambler/cipher would seem to add the necessary "salt" to what the human person remembers, making the actual "computer-keyboard-presses password" hard to crack. The user could then perhaps just remember a certain easy-to-remember natural-language phrase to input into the scrambler/cipher.
  • After the user uses the scrambler/cipher, the user can add to the password without the scrambler/cipher installed, for the better defeating of mind-reading password-capture attacks. As already touched upon in §⟪Protection by thinking little⟫, passwords that aren't thought about much during the keying-in, perhaps because of very good rote memorisation, can perhaps defeat mind-reading attacks. The second part of the password, that the user enters without the scrambler installed, can be this other kind of rote-memory password that is keyed-in with very fast key presses. There would then be a two-pronged defence against mind-reading capture of the password: the defence formed by the first part of the password entered using the paper-based scrambler, and the defence formed by the second part of the password entered using rote-memory fast keyboard entry.
  • It may also be a good idea to use copy-and-paste functionality of text never displayed on the VDU (perhaps by using white-on-white text), to add a third part to the password. Even though cryptographic security USB tokens can be used for similar effect, it might be that somehow, the security added by such tokens is defeated perhaps due to there being certain surveillance deliberately monitoring such technology. Copy-and-paste functionality might overcome such monitoring, because adversaries might not be considering the copy-and-paste functionality so much. Copy-and-paste functionality can overcome key-loggers, because recording the key presses associated with using the functionality does not reveal the actual pasted text.

The Google password manager/vault master password in such a set-up would be a particular point of attack. Fortunately, the free security generally available with Google accounts, seems to be good. Things like security alerts concerning unusual logins to your account, help to keep security at a high level. The master password would have to be changed on a regular basis to ensure security. I may just end-up doing it once per week, in tandem with creating a new paper-based keyboard scrambler/cipher each week.

By only needing to remember one master password, and because that password would be used each working day, the password memorisation work would hopefully be much less. In contrast, several passwords that are each used on average just once per month, might take significantly more resources for memorisation, both for all of them together, as well as for any one of them. Practice makes perfect, and repetition each day helps with memorisation.

     MarkJFernandes (discusscontribs) 16:38, 23 October 2020 (UTC)


Perhaps add info under § ⟪Digital cryptography: security certificates, keys & tokens⟫, about the requirement to keep public-key security certificates up-to-date?

[edit source]

Maintaining an up-to-date cache of public-key security certificates is likely important when only using a static historic copy of an OS—perhaps on a live DVD—in order that the security of internet communications be maintained. To this end, downloading the latest certificates to some secure storage in such circumstances, perhaps each day or each week, would probably be a good idea. It may be no use updating the cache only once it is detected that internet communications have become compromised, because downloading the latest certificates over a compromised channel is open to a MITM attack, where the man in the middle can keep modifying your downloads so that you only download bogus certificates. When your OS is instead installed to a rewritable system disk (rather than being a static version), the OS will likely apply security updates of its own accord, to make sure its cache of certificates is kept sufficiently up-to-date.
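As a rough sketch of how such staleness could be checked, one could record the `notAfter` expiry dates of cached certificates and flag any cache nearing expiry (the date string and the 30-day threshold below are hypothetical illustrations, not from the book):

```python
from datetime import datetime, timezone

def days_until_expiry(not_after: str, now: datetime) -> int:
    """Days remaining before a certificate's notAfter date
    (string in the textual format printed by OpenSSL, e.g.
    'Dec 31 23:59:59 2030 GMT')."""
    expiry = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expiry = expiry.replace(tzinfo=timezone.utc)
    return (expiry - now).days

# Hypothetical cached certificate expiry date:
remaining = days_until_expiry("Dec 31 23:59:59 2030 GMT",
                              datetime.now(timezone.utc))
if remaining < 30:
    print("certificate cache is going stale - refresh over a trusted channel")
```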

     MarkJFernandes (discusscontribs) 16:43, 3 November 2020 (UTC)


Are collision attacks a serious threat to the security of digital signatures (such as those used for signing firmware)?

[edit source]

The info in the table under the section "Cryptanalysis and validation" of the Wikipedia article on SHA-2, appears to indicate that they might be.

" ... Definitely not formally a security researcher (by any means), but just was wondering whether due to the signatures likely being relatively short, collision attacks (based on hash collisions, where two different messages can produce the same signature? [see https://en.wikipedia.org/wiki/Collision_attack]) could be used to sign rogue firmware components. You could have a small malware program, then bulk it up with "dead" redundant code (doesn't get used) in such fashion that you manage to get it matched to some valid signature. See https://en.wikipedia.org/wiki/Flame_(malware) . ..." - https://www.raspberrypi.org/forums/viewtopic.php?f=41&t=286049&start=50#p1737381
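As a toy illustration of the birthday-attack arithmetic behind such concerns, here is a collision search against SHA-256 truncated to 16 bits (the truncation is chosen purely so the search finishes instantly; it does not model any real signature scheme):

```python
import hashlib

def short_hash(data: bytes, bits: int = 16) -> int:
    """SHA-256 truncated to the first `bits` bits."""
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

def find_collision(bits: int = 16):
    """Birthday search: a 16-bit hash collides after roughly
    2**8 = 256 attempts on average."""
    seen = {}
    i = 0
    while True:
        msg = f"message-{i}".encode()
        h = short_hash(msg, bits)
        if h in seen:
            return seen[h], msg  # two distinct messages, same short hash
        seen[h] = msg
        i += 1

a, b = find_collision()
print(a != b and short_hash(a) == short_hash(b))  # True
```

Full-length SHA-256 makes such a search infeasible; the point is only that any scheme whose effective hash or signature length is short inherits this weakness.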

     MarkJFernandes (discusscontribs) 17:54, 30 November 2020 (UTC)

«Wireless Communications» chapter   (chapter 3)

[edit | edit source]


Add information about using Faraday cages/shields and/or aluminium foil as shields to shield unwanted wireless communications?

[edit source]

I suggest that such information on Faraday cages/shields be added to this chapter as well as to the "What to do when you discover your computer has been hacked?" chapter. See here for information about this.

Wrapping mobile phones and smart cards that have RFID technology in aluminium foil is apparently a way to prevent certain kinds of tracking that could constitute security compromises. Such information can also perhaps be added to this chapter. See also the note here written in respect of such methods for the storage of computing devices.

     MarkJFernandes (discusscontribs) 15:11, 5 June 2020 (UTC)


VPN over free WiFi may be a good idea sometimes? Re. §"Shared WiFi"

[edit source]

Whilst Essex police's advice was not to use free WiFi for anything you wouldn't want a stranger to see, if you use a VPN (virtual private network) for your internet access over free WiFi, there probably is nothing to worry about, apart from people knowing that you are using a VPN. It seems that the VPN would conceal from anyone with security access to the free WiFi exactly what you were doing, apart from the fact that you were using a VPN. In certain circumstances, using free WiFi in such a way may be preferable if adversaries are targeting you based on the internet connection easily associated with your name (whether that be at your office, at your home address, tied to your mobile phone number, or otherwise in easy association with your name).

     MarkJFernandes (discusscontribs) 08:38, 21 July 2020 (UTC)


Having intermediate device for internet connection might be more secure?

[edit source]

Rather than your main computing device directly connecting to the internet, perhaps using smartphone tethering (for internet connection) or similar, might induce greater security. To improve security potentially even more, all communications tech (such as WiFi+Bluetooth cards) can be physically removed from your main computing device. Such greater security at least partly works on the principle of isolating the hardware used for your communications. When instead the hardware is within your main device and "known" to the other system components, malware in all the different firmware and dis(c|k)s, as well as "maltech" in the hardware, can potentially "piggy-back" over the communications tech to cause high damage to your computing—the attack surface is effectively larger, and the potential of such attacks is also much higher because of the integrated nature of computer systems. For perhaps even more security, use the intermediate device for downloading files that are then simply copied over using the OS's standard file-system copy operations. Such is probably more secure, but may still be open to attack if the main device has malware able to interfere with such copy operations.

A USB dongle for WiFi or mobile broadband can count as such an intermediate device. Increased security is perhaps attained because the potential damage caused by malware over the USB interface is perhaps much less than the damage caused by a wireless-communications PCI card over the PCI interface, which is 'within' the computer architecture (wireless tech embedded in SoC tech may be even worse). Additional security can perhaps be attained by configuring the ordinary—that is, non-firmware—software and drivers used for USB communication to be safer than usual (to perhaps act a bit like a firewall). USB is not the only alternative interface, and there may be other alternative communications interfaces that provide even greater security.

Such an intermediate device is similar to a hardware firewall, as well as a proxy server. If a trusted smartphone can be set-up to mimic the functionality of a hardware firewall and/or proxy server, then using such a smartphone as the intermediate device for a main computer's internet connection, could provide very good security. If the smartphone were set-up as a proxy server, this would perhaps provide strong audit functionality if the server were able to read the HTTPS traffic streams unencrypted (configuring such "spying" capability seems to be possible, see here). Such auditing could also be coded so as to "quarantine" any communication detected as suspicious, until human intervention provided the go-ahead to let such communication out of "quarantine" and onwards to its destination, a bit like how antivirus software works. One potential weakness in such a system, might be that all the TLS (Transport Layer Security which uses cryptography-based security certificates) security for the user, might occur on the main computing device without any double-checking of its correctness. This could mean, for example, that bogus TLS certificates might be deceptively used by malware on the main device for certain communications, that are then open to MITM attacks. To mitigate against this, the intermediate device ought to perform all the TLS functionality for communication from and to the main device (a bit like how the Nitrokey product works?), or the intermediate device ought to confirm that all cryptography operations either side of it were legitimate (i.e. that bogus security certificates were not used, etc.) Some brief research about such a set-up, has indicated that probably there's no technology product out there to do these things, so perhaps this might end-up being something of a new invention.

Interestingly, if on the other hand, your main device is trusted but the smartphone or dongle used for supplying the internet is not trusted, you can piggy-back over the security of the TLS encryption system (by, for example, only using the internet over HTTPS connections), to safely use the internet. Such piggy-backing relies on the aspect that encryption can be leveraged so as to safely use potentially-compromised equipment. See the "Dealing with the situation where you want to work with potentially security-compromised equipment/software" note for more about this.

While these ideas particularly resonate with respect to providing an alternative to built-in wireless internet connections, they also apply to some extent to the provision of alternatives to built-in wired internet connections. In this regard, I'm not so sure whether this chapter on wireless communications is the right place for these ideas.

     MarkJFernandes (discusscontribs) 11:11, 3 November 2020 (UTC)

«Digital storage» chapter   (chapter 4)

[edit | edit source]


Add umbrella parent section for the subsections that are each dedicated to a particular pair-wise comparison between storage types?

[edit source]

The subsections that each compare one type versus another, can probably be usefully placed under an umbrella parent section labelled something like 'Various pair-wise comparisons that compare different storage types (comparisons in the form of type A vs. type B)'.

MarkJFernandes (discusscontribs) 13:41, 24 April 2020 (UTC)


Generalise name of §"Magnetic storage: tapes vs. discs"

[edit source]

Whilst tape storage appears to be mostly only available in magnetic-tape form, it makes sense to generalise the title of "Magnetic storage: tapes vs. discs" section so that the section is about a comparison between tape and disc storage in general (irrespective of the particular implementation).

MarkJFernandes (discusscontribs) 14:05, 4 May 2020 (UTC)


Add section called '"Conspicuous markings on paper" based storage vs. "Non-conspicuous-data" based storage'?

[edit source]

Such a section would perhaps focus on the aspect of being able to perform naked-eye visual inspections to confirm the correctness of the data. Andrew Huang (as mentioned elsewhere in this Wikibooks book) suggests that such visual inspection is beneficial for computer hardware: he specifically suggests using transparent materials for things like keyboards, so that users can easily confirm that the hardware internally looks as it should.

Being able to do such inspections is useful because if such inspections are not possible, you are likely going to have to rely on hardware to confirm data correctness, and that introduces, with respect to the relied-upon hardware, the possibility of firmware malware as well as of hardware-based tampering.

Examples of "Conspicuous markings on paper" based storage: QR codes; punched tape; paper-based ticker tape.

If adding such a section, it probably would be a good idea to link to the Wikipedia "Paper key" page.

Such "Conspicuous markings on paper" based storage might be good for BIOS/UEFI startup code.

MarkJFernandes (discusscontribs) 09:45, 18 May 2020 (UTC)


Add info contrasting recoverable file deletion with data-sanitisation file deletion?

[edit source]

The topic of 'file deletion' in respect of whether it be recoverable deletion or data-sanitisation deletion, is currently not covered in this chapter. It probably should be covered, as it does concern end-user computer security.

MarkJFernandes (discusscontribs) 07:57, 16 May 2020 (UTC)

Add information about storing data in volatile RAM not powered separately?

[edit source]

Storing data in volatile RAM that is not powered separately may offer certain security advantages, in terms of not leaving traces after the computer has been powered off. However, in "State considered harmful - A proposal for a stateless laptop" by Joanna Rutkowska, it is indicated that residual data can remain in powered-off DRAM for a long time; to blank such DRAM, a secure wiping (possibly zeroing) procedure could be performed on it (the paper mentions that short-circuiting pins on the DRAM might work to do such blanking; short-circuiting could be quicker and more energy efficient). Mentioning these things might be worthwhile, if adding info about digital storage in volatile RAM not powered separately. Such RAM can be used for holding software including the OS (also see the note about Puppy Linux and the "Secure computing using Raspberry Pi for business purposes" project proposal that proposed to keep the OS in volatile RAM without separate power)[2]—if any malware infects such RAM, powering the computer off (including through rebooting) should effectively get rid of it. If you could solely use such RAM for a particular OS, it would likely be very fast, and would also perhaps preempt the need for the isolated-virtualisation security model offered by Qubes OS in certain circumstances, because the OS would effectively be sandboxed. Perhaps this was part of the thinking behind the early computer models that required loading software from things like cassette tape into volatile RAM that was not separately powered?

Storing data in such RAM can have additional advantages if used in conjunction with using sleep mode during periods when the computer is not being used—see here for more about this.

     MarkJFernandes (discusscontribs) 18:13, 26 October 2020 (UTC)


Do SD cards suffer a security weakness with regard to potential clandestine embedded wireless technology?

[edit source]

SD cards can have WiFi tech in them, as described in the §⟪About using a Wi-Fi enabled SD card⟫ under the §⟪Regarding how to obtain software⟫ (in "Software based" chapter). Could this be a security weakness of SD cards? Perhaps devices that instead have their chips visible or at least easily open to visual inspection, are then better?

     MarkJFernandes (discusscontribs) 14:00, 21 September 2020 (UTC)


Storage and retrieval without needing to use any firmware

[edit source]

Firmware used specifically for a particular storage medium is a point of weakness, and an attack point, in respect of malware potentially being hidden in the firmware. This is the case with modern mass-storage devices such as SD cards, memory sticks, SSD drives, etc. It is also the case with older storage technologies such as CD drives, DVD drives, hard disks, and probably also floppy disk drives (or at least probably all the brand-new floppy disk drives that are nowadays available).

By using a standard cassette tape player, maybe a walkman, perhaps an old one, data can be stored on magnetic-tape cassette tapes (as was done several decades ago for computers like the Spectrum 48k [back in the 80s]). Probably such tape players have no firmware. But even if they do have firmware, the re-purposing of the technology from being music technology to being data-storage technology would probably overcome various security threats, as adversaries would likely not consider developing malware for such tech and would also likely find any such development much harder (see §⟪DIY security principle⟫ in the ⟪Broad Security Principles⟫ chapter for more about re-purposing as a broad security principle). Reusing an old tape player (maybe a walkman) that you have lying around could be one way to better ensure that adversaries have not tampered with the tape player you use. The tape player can simply play audio into the mic-in of the computer, and receive audio through the headphone socket. The OS would provide the software for dealing with the data storage medium, and this is likely desirable, as OS installations are often easier to deal with than firmware in terms of malware removal and detection (such installations often being on "exposed" and large SSDs or HDDs). There is a chance that firmware in the sound card (or the SoC for the sound card?) could pose a point of weakness in terms of potentially harbouring malware, but overall, if the tape player has no firmware, the number of points of weakness/attack should be smaller in respect of the number of firmware chips. As just outlined, there are also other reasons why such tech would likely be potentially more secure.
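As a rough sketch of the encoding idea, data can be turned into audio as a simple two-tone FSK signal (loosely in the spirit of the old Kansas City standard used for home-computer cassettes; all frequencies and rates here are illustrative assumptions, not a documented scheme):

```python
import math

SAMPLE_RATE = 44100                # audio samples per second (illustrative)
BAUD = 300                         # bits per second (illustrative)
FREQ_ZERO, FREQ_ONE = 1200, 2400   # tone frequencies for 0 and 1 bits

def encode_bytes(data: bytes) -> list:
    """Encode bytes as a list of audio samples in [-1, 1]:
    each bit becomes a short burst of one of two sine tones."""
    samples_per_bit = SAMPLE_RATE // BAUD
    bits = [(byte >> i) & 1 for byte in data for i in range(8)]
    samples = []
    for bit in bits:
        freq = FREQ_ONE if bit else FREQ_ZERO
        samples.extend(math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
                       for n in range(samples_per_bit))
    return samples

audio = encode_bytes(b"hi")  # 16 bits, each rendered as a burst of 147 samples
```

Decoding would do the reverse (detect which tone dominates each bit period), and at 300 baud a 90-minute tape would hold only around 200 KB, which matches the note below about capacity being limiting.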

The amount of data that can be stored on such tapes might be quite limiting. Perhaps such tapes might then be used for things like firmware code, maybe firmware backups.

     MarkJFernandes (discusscontribs) 11:41, 26 September 2020 (UTC)


Researching better than normal SD cards, or way to interface with SD-card slots to overcome SD-card vulnerabilities

[edit source]

So far, my researching these issues hasn't really found much out there to overcome the vulnerabilities posed by using 'SD card' slots. The only thing that I've found that might be promising is the 'TE0747 - MicroVault' open-source-hardware product. It could be that this product might be more 'trustable' than most products out there, just because it is based on open-source technology, and also because it might be produced in Germany. But what is more interesting, is that it may be readily possible to wipe the SD card's microcontroller firmware data, and reinstall a fresh firmware image. If possible, it could be one way to be able to make sure no malware is in the SD card's microcontroller firmware, which if present, is a serious concern.

Surprising that I have not been able to find any other solution.... Maybe there is a cover-up?

     MarkJFernandes (discusscontribs) 14:55, 9 October 2020 (UTC)


Cryptography can be leveraged to enable the safe use of potentially unsafe digital storage

[edit source]

See the "Dealing with the situation where you want to work with potentially security-compromised equipment" note for more about this; the paragraph on how to leverage cryptography technology, is the relevant one.

     MarkJFernandes (discusscontribs) 13:05, 19 October 2020 (UTC)


Digital-storage security through multiple copies of data

[edit source]

There appears to be a general digital-storage security principle whereby greater security can be induced simply through keeping multiple copies of data. If such multiple copies are kept physically isolated from each other, perhaps some as backups stored miles away from the user's location, then even more security can perhaps be attained. The security is simply based on the fact that the likelihood of adversaries "messing" with every single copy of some data is generally lower than the likelihood when only one copy is kept. The same principle is in play when users make backups of their data, although the threat in such cases may be less from adversaries and more from system failures. Whilst physical isolation can improve security, simply keeping multiple copies of your data on your own computer system may be beneficial. Being able to make multiple copies is dependent on such being affordable, and so making multiple copies using cheap DVDs/CDs and/or cheap cloud storage may be the way to go for some people. Such security also requires that the copies be able to be checked for sameness, as an integral part of the security; fortunately, computing technology is quite able to do this.
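Checking copies for sameness is straightforward with cryptographic hashes; a minimal sketch (the file paths would be whatever the user's copies happen to be):

```python
import hashlib

def file_digest(path: str) -> str:
    """SHA-256 digest of a file, read in chunks to handle large files."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def copies_match(paths) -> bool:
    """True when every listed copy has an identical digest."""
    return len({file_digest(p) for p in paths}) == 1
```

A mismatch only tells you that some copy differs, not which one is authentic; with three or more copies, majority agreement among digests can suggest which copy was tampered with.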

This idea of keeping multiple copies of data, was touched upon in respect of keeping multiple copies of a live system DVD, in a Raspberry Pi project attempting to establish a secure computing environment for business purposes—see here for more about the project.

The note "How to compare live OS discs obtained using multiple channels, when you have no trusted OS...." is related to this note.


     MarkJFernandes (discusscontribs) 15:42, 29 October 2020 (UTC)


Can perhaps generalise §⟪Rewritable media vs optical ROM discs⟫ a little?

[edit source]

Whilst §⟪Rewritable media vs optical ROM discs⟫ probably is quite worthwhile, there is no mention on the page of other kinds of "OTP" (one-time programmable) storage: for example, the OTP microchip tech sometimes designated for portions of some firmware, is not mentioned. Deliberately using OTP tech. in a security-related way, may be a good idea for security. See https://www.raspberrypi.org/forums/viewtopic.php?f=41&t=286049&p=1731799#p1731432 for some info on how OTP tech can be leveraged for security. It might be possible to create OTP tech through the use of a USB security token, and non-ROM memory— see this Rasp Pi Forums post for particular info on this.

     MarkJFernandes (discusscontribs) 16:47, 30 November 2020 (UTC)


Talk about one-way storage?

[edit source]

One-way storage, such as that available with certain USB security tokens, might be a broad category of storage that could do with particular treatment in this chapter. Alternatively, it might be worth mentioning it in the "Passwords and digital keys" chapter, as its primary security advantage seems to be perhaps just in respect of the storage of private keys used for asymmetric-key cryptography. It is mentioned in passing in the Raspberry Pi Forums post here, as well as in the Rasp Pi documentation in respect of making certain customer OTP (one-time programmable) bits unreadable here.

     MarkJFernandes (discusscontribs) 17:16, 30 November 2020 (UTC)

«Some measures that are primarily physical» chapter   (chapter 5)

[edit | edit source]


Metal boxes or containers (such as briefcases) lined with metal foil, have the potential of shielding malware RF communications

[edit source]

After performing some experiments, completely wrapping a mobile phone in metal foil does indeed appear to block mobile-network, WiFi, and Bluetooth communications to and from the phone. However, if there are some slight gaps in the wrapping, there is the chance that the blocking might fail. Given such principles, certain metal boxes might automatically be capable of performing such shielding. For metal boxes that have the gaps just mentioned, covering over those gaps with metal foil (such as cheap aluminium foil used for cooking) might provide sufficient shielding to make the whole box a shield for its contents. Likewise, lining containers such as briefcases with metal foil has the potential of turning the containers into shielded containers.

Perhaps such information should be integrated into this chapter, perhaps under the "Metal boxes", "Padlock-able laptop bag", and "Combination lock briefcase" sections. If doing so, linking to the "Wireless Communications" chapter may be a good idea—see the note here.

     MarkJFernandes (discusscontribs) 15:08, 5 June 2020 (UTC)


When using planar materials, unrepeatable-pattern principle can be better if...

[edit source]

With respect to the "Perhaps the simplest and best idea" subsection of §"Tamper-evident security-system ideas" under §"Exploiting unrepeatable patterns for tamper evidence" of this chapter 5 (labelled "Some measures that are primarily physical"), it can be better to use transparent or mesh materials, and then to layer such materials on top of each other. The non-opaqueness and the layering induce unrepeatable patterns, in security photographs taken, that have stronger unrepeatability. For example, why not layer several sheets of translucent shell-suit material?

Additionally, if you use a material that makes use of a variety of colours that is on the material in a haphazard fashion, this can result in greater unrepeatability. If you further choose unusual colours, and maybe some metallic colours (like silver), shimmer colours, and/or iridescent colours (like the appearance of the underside of CDs), this can result in even greater unrepeatability. The use of holographic materials and/or irregularly textured/bumpy materials might result in even more unrepeatability. These things can perhaps be induced by the simple application of a variety of art paints in a particular way.

     MarkJFernandes (discusscontribs) 07:51, 16 July 2020 (UTC)


Unrepeatable-pattern principle can be better if a special type of photography is used....

[edit source]

Using X rays or T rays for the security photography recording the unrepeatable patterns can be better where patterns have a 3D aspect. An adversary may think that the only pattern to duplicate is the one that is visible. But if you use X rays or T rays, the recorded pattern may be at some depth that only you know, making it even harder for an adversary to repeat the material configuration such that the repeated configuration also repeats the same X ray or T ray patterns.

Also, even without a 3D aspect in the unrepeatability patterns, other electromagnetic photography can be used for potentially improved security. For example, could RF distortion induced by some unrepeatable-pattern configuration be captured as security images?

These ideas link to the ideas present in §"Measuring physical properties for authentication" in the "Broad security principles" chapter.

     MarkJFernandes (discusscontribs) 08:10, 16 July 2020 (UTC)


Potentially better terminology to use, and other potential improvements, for §⟪Specifically for goods in physical transit⟫

[edit source]

A web-page on the puri.sm website uses the term 'anti-interdiction' to describe measures aimed at preventing and/or detecting MITM (man-in-the-middle) attacks during the transit process. It might be worthwhile using this terminology in this section to align better to the terms used in the "secure computing" community. There is also potentially useful info at the web-page that perhaps can be added to this book, and in particular to this section.

     MarkJFernandes (discusscontribs) 08:05, 18 September 2020 (UTC)


"Unrepeatable-patterns security" research

[edit source]

Research around “main idea” sub-section.

[edit source]
  • With respect to the adding of shredded optical discs such that light-refraction distortions are added to the unrepeatable-pattern material, I found it to be not so effective, probably because the refraction distortion/deformation wasn’t strong enough. The iridescence effect (“rainbow” effect) from shredded discs was probably considered not significant enough in light of the effects obtained from the combination of transparent pieces, colour-tinted semi-transparent pieces, and reflective pieces.
  • Adding colour-tinted semi-transparent bits to the mix is likely important for increasing pattern complexity, and for making it harder to duplicate the pattern. If the bits can be shaped so as also to create visual deformations/distortions (like the refractive distortions of a glass sphere), then this would probably improve security even more. Have tried creating colour-tinted transparent bits using sugar and water heated up into a kind of gel that afterwards cools to become solid. Certainly it works in terms of creating such semi-transparent coloured bits, but such bits end-up being sticky, which is undesirable—it means that the security patterns tend not to be disturbed upon removing the secured item from the container (because the bits stick together). If the solution turns into caramel then the stickiness is not there, but then there is invariably a brown tint, which is undesirable from a “colour variety” point of view. Recipe used was approximately: mix together 2 level teaspoons of white sugar, a little paint (for the colour tinting), some water, and a small bit of oil; place in microwave for one minute, and then drop hot liquid on to an oiled surface to create several separate ‘islands’ of the liquid; leave to cool. To overcome final stickiness, I did try coating with varnish, and some other things, but mostly it didn’t seem to work. I thought that perhaps the sugar blobs were sticky on the outside due to the presence of water, so I tried placing them under the grill in an attempt to remove the water from the outside, but it didn’t work to remove the stickiness.
  • Crystals of sugar are not sticky, so how about trying to make coloured transparent sugar crystals? This could perhaps work, but I am not sure how to make them. Are copper sulphate crystals made through electrolysis? Could that possibly indicate a path forward?
  • Shattered glass/plastic coated with coloured varnish might be cheaper and easier to make than colour-tinted sugar blobs. Old CD cases, bubble-wrap pieces, or old spectacles could perhaps be used. CD cases seemed inappropriate, probably because of their rigidity and because the shapes produced tended to interlock rather than tumble; these characteristics probably meant that patterns were not easily destroyed upon taking out the secured item (such easy destruction is desirable). Broken glass/plastic from spectacles would likely have the same problem. The friction between bubble-wrap pieces also tends to prevent the security patterns from being easily destroyed. Very light pieces likewise tend not to tumble or move easily; the weight of rice grains probably explains why rice tumbles and moves around much more readily. Material that tumbles is desirable, rather than material with high friction, interlocking shapes, or very light weight. Rice appears to be such a material, but it is not transparent, and what little semi-transparency it has does not lend itself to bearing a variety of coloured tints.
  • Coloured jelly broken up into bits and pieces might work when the item being secured only needs to be secure for a short while, perhaps up to a couple of weeks. The wobbly and flexible nature of jelly means that patterns can perhaps be more unrepeatable and more easily destroyed. By mixing paint in with the jelly, different tints can be created, thereby increasing the pattern complexity and pattern unrepeatability. However, after having left some gelatine jelly solution for a long time, it smelt quite rank, so perhaps jelly would only work for a short while (as just mentioned). Ideally, the bits shouldn’t decay. In this regard, plastic that doesn’t degrade easily, which would otherwise be thrown out, would seem to be good material.
  • If some of the particles/pieces are metallic, and the submerged item is magnetic, then perhaps the removal of the submerged item would more likely result in disturbance of the unrepeatable patterns. Alternatively, static-electricity attraction can be used instead for similar effects.
  • A lubricant or Teflon-like spray (maybe furniture-polish spray or WD-40) could perhaps be used to make the bits very slippery, so that they tumble very easily and the patterns are destroyed more readily when someone tries to get to the secured item. I did try furniture polish, but it didn't work with paper-like bits. I think I may also have tried WD-40, but again it probably didn't work with such bits. Applying WD-40 (or some other lubricant) to rice instead might be more effective, making the grains tumble more easily.
  • Perhaps adding hair or string, multicoloured using differently coloured paints, could increase pattern complexity and unrepeatability. I have managed to do this for string. String seems to retain its shape quite easily; it would probably be better if the material sprang into a different shape upon disturbance, so fairly stiff nylon or hair could be better in this regard.
  • Perhaps multicoloured rice would work well. But once again, because of the lack of transparency in rice, rice probably wouldn’t be so effective.
  • To add a coloured tint to transparent plastic wrapping-type material, paint can be used. However, plastic that already has a tint may be preferable, as such plastics seem to be more transparent (certain "assorted chocolates" sweet wrappers work well; such boxes of chocolates may also furnish good reflective foil in a variety of coloured tints). I have been trying to add tints with paints and am finding that the transparency is often impaired, probably because of the composition of the paints I am using: they are not specifically designed for transparent effects, and special paints for transparency (maybe glass paints?) would likely be better. I did think about using diluted acrylic paint built up in several layers, but the transparency issue would probably remain, as the paints are simply not designed for transparency effects. Diluting with varnish or water has made painted areas lumpy and streaky, with both acrylic and oil paints. Diluting/thinning with white spirit (or turpentine) could perhaps help, but I think the same transparency issue would still likely exist. I have tried diluting oil paints with oil but still cannot achieve the colour-tinted semi-transparency effect I desire; such diluted oil paint also takes a long time to dry.
  • Perhaps precision in camera positions and angles is not needed for the security-image photographs. Instead, a video that rotates its viewport all the way round the item, first in the longitudinal direction and then in the latitudinal direction, could possibly be used. Such videoing may also be a better way to make sure the entire surface of the container is visually recorded. To make such a video, rather than rotating the camera, one could possibly fix a smartphone camera on a table surface, pointing upwards, with the container rotating above the camera. Closing the curtains and using the phone's flash would probably both be good, as reflections from the unrepeatable-pattern material would otherwise vary with differences in scene lighting.
  • A sealable transparent flexible pouch may be better for security, as more pattern destruction would likely take place when the secured item is taken out of such a container. Tinging the pouch with random burn marks, along with deforming it in random ways, might make the bag/pouch itself unrepeatable. If the bag can then be sealed shut (perhaps with glue) so that the bag itself constitutes an opened-evidence mechanism, that could provide even more security. Gluing the container shut can also add security through greater disturbance of the unrepeatable-pattern material inside the container; the principle at play is that there is greater pattern disturbance in the contents during the opening of the container. Not all set-ups would necessarily exhibit this principle, and special effort may be needed in designing the container to ensure it applies to the relevant set-up.
Details of test and research to see whether multicoloured vaseline patterns in the lining of a transparent plastic pouch could be the basis for a tamper-evidence mechanism
  • What if an adversary uses a glue to recreate the precise security patterns, and the glue has the property of evaporating without trace and without disturbance of the precisely recreated patterns? Perhaps using vaseline mixed with differently-coloured paints in the patterns would mitigate against such an attack. The inside surfaces of a transparent pouch could be lined randomly and unevenly with such multicoloured vaseline. After filling the pouch to the brim and beyond with bits, and then closing the pouch tight shut, the bits would disturb the random lining of the multicoloured vaseline. Upon retrieving the secured item, probably the lining patterns would be slightly destroyed, and furthermore, upon trying to reseal the pouch such that it looked like it had never been opened, the lining patterns would likely change even more. Tamper evidence would thereby be induced through such pattern changes in the multicoloured vaseline lining.
    I did consider whether toothpaste, paints purely on their own, or ultrasound gel mixed with paints could be used instead of the vaseline-paints mixture just mentioned. Both toothpaste and paints on their own would likely dry out and become hard, making them unsuitable. Ultrasound gel with paint mixed in works in the short term (over a couple of days), except that the action of gravity over time can destroy the patterns. Vaseline mixed with paints seems to be the best: in experiments so far, it has worked over the course of fourteen days (from 17.9.2020 till 30.9.2020), with the patterns seeming not to have changed at all. At the end of the testing period, the vaseline was still soft and pliant, meaning the security patterns formed by it could still easily be destroyed by disturbance, and so it was still good for creating tamper evidence after disturbance.
Photo of pattern bits and pieces that tended to form patterns not readily destroyed enough in the opening of the security container
  • After experimentation, I have found there is a problem when the security-pattern material is mostly coloured paper and the like: the patterns are not being adequately destroyed during the opening process. A 'splurge' effect, such that the contents 'splurge' out of the container upon opening, would probably improve things. Squeezing the unrepeatable-pattern material into the container should in theory create such a 'splurging' effect, but I am having difficulty achieving this. I have tried squeezing (and compressing) pieces of flexible plastic material (such as plastic bags). They do readily expand from a compressed state when the pieces are each on their own, but when a whole bunch of pieces is compressed together, the expansion rate is not so great: they resist expansion owing to the combined weight and inertia of all the pieces. The contents simply do not 'splurge' out; they just expand slowly and only a little. Adding compressed metal springs to the container could help achieve the desired effect, a bit like a jack-in-the-box toy. I tried using a small metal spring from an old pen to improve the 'splurging' effect, but closing the container with the spring adequately compressed and positioned seems to be difficult, and I found it hard to improve things with the spring from the pen. More broadly, any material with springy (elastic) attributes may help to achieve this mechanism. I have tried old kitchen sponges, greatly compressed, but the 'splurging' effect was not much improved: still slow. If the container were stretchable, and became stretched when the item(s) were secured, that could help facilitate the 'splurging' effect. I have tried gluing elastic bands to the inside of such a pouch to simulate such stretchability, but the super glue I used wouldn't stick to such surfaces; glue-gun glue might be better[3].
I have also tried elastic bands around the outside of a see-through pouch, but that has not improved things much either. Stretched elasticity with a stronger tendency to return to its unstretched state could improve things, but I am slightly unsure how to create such stretched elasticity in the container while keeping the container stretched during the time items are secured in it. Maybe compressed rubber (such as exists in a compressed rubber ball) and/or transparent balloons might be capable of creating such elasticity. In any case, so far, any 'splurging' mechanism I have managed to foster has simply not been strong enough.
    One of the other problems appears to be that the security-pattern bits I am using have too much friction (maybe they are even a little sticky) and are quite light in weight. Probably largely because of this, the patterns tend not to be destroyed as much as I would like. Heavier bits like raw rice grains would probably be better. Rice additionally has a tumbling effect, which is desirable, and I suppose rice can be coated with a lubricant to accentuate the tumbling of the grains. Perhaps I could instead try coloured rice in a see-through bag that is rolled up, where hopefully the unrolling process (perhaps needed to get to the secured item[s]) inevitably destroys the patterns to a sufficient degree. But rice is not very transparent, which may mean the resulting patterns aren't as complex as the security protocol requires. Additionally, rolling up the bag reduces the surface area making up the pattern images in the security photos, so the patterns are less complex than with an unrolled bag. Raw rice grains sometimes appear semi-transparent; perhaps extremely strong light shone through the grains could create a strong enough transparency effect for sufficiently complex security patterns.
    At present, these problems have been significant enough to make me feel as though I have to ‘go back to the drawing board’ with my ideas—it is very much necessary that the unrepeatable security patterns are readily destroyed upon opening the container.
  • If securing just one item, it is better to split the item into several parts (if possible), with each part submerged separately in the unrepeatable-pattern material and the parts spaced apart. This generally improves security because retrieving all the parts then causes more pattern disturbance. If doing so, boxing the parts in radiation-shielded boxes might be a good idea, so that adversaries cannot easily use non-invasive scanning equipment to figure out which boxes contain parts vulnerable to tampering attacks. If an adversary could do that, they could possibly tamper with just the key part and then place it back into the container without much disturbing the rest of the pattern mix. If they could not, there would be an increased chance that they would have to open every single box, resulting in greater security-pattern disturbance and therefore a generally better system of tamper evidence.
  • I did consider whether a glass of water with coloured oils on its surface, kept completely motionless, could be used for keeping a waterproofed memory stick submerged in it (as a tamper-evident mechanism). A person would photograph the surface patterns, and the photos generated would then be the security-pattern photos. However, when I tried using oil paints for the oils on the surface, it didn't work: the patterns changed regardless of disturbance to the liquid. I also tried candle wax, but that too didn't work (I probably also tried wax mixed with oil, with the same unsuccessful outcome). Instead of coloured oils, I tried pencil-shaving fragments (probably on top of a film of oil on the water's surface), but that too didn't appear to work. A suspension of coloured fragments in a transparent liquid could work well, but the fragments would then have to be motionless. How about the oil salad-dressing bottles you might see in restaurants? Do they have bits suspended in the oil? Could that perhaps be used? Skin on milk could perhaps be used, if it is easily destroyed when trying to get to the contents. If skin could be created on all inside sides of a transparent container, that could be really good.

Research around “Perhaps the simplest and best idea” sub-section.


After experimenting with crumpling plastic bags, I have realised that such material is too shape-retentive: the crumples tend not to be destroyed easily. The same may well be true of shell-suit material. In light of such undesirable properties, I have instead employed silk and silk-like fabrics for such unrepeatable-patterns security. So far, over the course of weeks, it is proving to be effective. One issue is that the fabrics can easily move if they are not secured in place; I have managed to secure them successfully by tucking the fabrics under and between items in the secured area, such that they are pinned, in some fashion, in place. To make sure the patterns haven't changed, I take two photos, each with the flash on. The first photo is the historic record of how the patterns should be, and the second is a "latest" photo which enables a check on whether the patterns have changed since the time of the first (historic) photo. By simply switching back and forth between these two photos on my digital camera (which could also be done on a conventional computer), I can easily detect, just through standard human vision, whether the patterns are similar enough to conclude that no intrusion into the secured area has occurred. The use of the flash means that I mostly don't need to worry about other light sources disturbing the security images. Using reflective un-patterned silk might be better than other materials: with the flash on, photography can capture a kind of topographical mapping of the subtle "hills and valleys" in the reflective un-patterned silk, which can make the security patterns stronger.

I speculate that using a clear plastic/glass sheet in front of, or around, a secured area where this kind of unrepeatable-patterns security is employed could be a good idea. Its purpose would be twofold: 1) it could stop people from accidentally disturbing the security patterns (especially if the sheet is screwed in place); 2) if fabrics are pinned, somewhat haphazardly, behind the sheet just by the force of the sheet, this may result in stronger security patterns that are more easily disturbed when getting to the secured area (getting there would involve removing the plastic/glass sheet, and so would probably also cause the 'tumbling down' of the fabrics, with related security-pattern destruction, to some extent). Such an idea could be applied to a briefcase or similar container, where the container is made partly transparent or a transparent container is specifically acquired. It could also perhaps be applied to glass display cabinets.
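The two-photo check described above (an historic flash photo compared against a "latest" flash photo) could also be automated. Below is a minimal sketch in pure Python; the function name, the pixel-list representation, and the 2% threshold are illustrative assumptions, not something from the book (in practice the two photographs would first be decoded into greyscale pixel arrays with an imaging library):

```python
def patterns_match(historic, latest, threshold=0.02):
    """Return True if two greyscale security photos look similar enough
    to conclude the unrepeatable patterns have not been disturbed.

    historic/latest: 2-D lists of pixel values in the range 0..255,
    taken from the same fixed camera position with the flash on.
    """
    # Photos of different sizes cannot be compared pixel-by-pixel.
    if len(historic) != len(latest):
        return False
    if any(len(a) != len(b) for a, b in zip(historic, latest)):
        return False
    total = 0
    diff = 0.0
    for row_a, row_b in zip(historic, latest):
        for pa, pb in zip(row_a, row_b):
            total += 1
            diff += abs(pa - pb) / 255.0
    # Mean absolute per-pixel difference: small means "unchanged".
    return (diff / total) < threshold
```

Note that such a naive pixel comparison assumes identical framing and lighting between the two photos (which is why the flash and a fixed camera position matter); any real tool would likely want some alignment step or a perceptual similarity measure instead.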

Research around new ideas not yet in the book.

  • The electrical resistances measured between random point pairs could perhaps provide configuration information serving as an alternative to security-pattern images (the main method researched thus far). The unrepeatable-pattern material would then be made up of bits and pieces of electrical conductors, perhaps of varied resistances, and perhaps together with some electrically-insulating bits and pieces. Such security could be much stronger than applying the unrepeatable-pattern principle to visual images (the application investigated so far).
Photo of transparent plastic pouch filled mostly with transparent or semi-transparent pieces of fabric-like plastic material, with light being shone from the underside into the pouch—looks like an X ray, doesn’t it?
  • Somewhat similar to the measuring of electrical resistances, is shining a light through a transparent container and capturing the image created on the other side, which could perhaps simply be the shadow cast. Such images could provide strong security, and could even be implemented in addition to implementing the conventional image security so far discussed in this research, with both implementations in operation for the same security container.
  • Rather than having the unrepeatable-pattern material fill all the empty space in the security container, I have developed a new idea that involves lining an inflated balloon with unrepeatable patterns. A transparent balloon ought to be used, so that the lining patterns are visible from the outside when the balloon is inflated. The lining patterns can be formed in the way already mentioned for certain other situations: create multicoloured vaseline (by mixing paints into vaseline) and apply random patterns to the inside lining of the balloon before it is inflated. To make the application of the multicoloured vaseline easier, you might be able to turn the uninflated balloon inside out, apply the vaseline, and then turn it back to its normal form. The item to be secured would need to be placed inside the balloon, probably before inflating (the balloon opening might need stretching to get the item in). Once the balloon has been inflated, the multicoloured-vaseline patterns (the security patterns) would hopefully be formed on the inside lining of the balloon and become photographable. The balloon would have to be sealed in such a manner that any possible air leakage would not unduly destroy the patterns during the time the item was secured; for such sealing, you could perhaps use some well-suited glue (maybe glue-gun glue[3]). With such a set-up, it would probably be very hard to tamper with the secured item (inside the inflated balloon) without noticeably destroying the balloon-lining security patterns. Deceptively recreating the security patterns in the same set-up would also likely be hard. Therefore, such a set-up would perhaps constitute quite an effective tamper-evident system.
Instead of multicoloured vaseline, multicoloured hot wax could perhaps be used: the wax would be poured in before inflating, the balloon would be quickly inflated shortly after pouring, and the wax would cool to form the security patterns on the inside lining of the inflated balloon. Alternatively, PVC glue mixed with different paint colours could be used to line the balloon; once the glue dries on the inflated balloon, there would hopefully be a coloured pattern lining that would be destroyed upon deflation. Confetti balloons, or the mechanism behind them, where the confetti is "activated" so that it sticks to the lining of the balloon, might also work. After inflating such balloons, it appears that creating an electrostatic charge by rubbing them on some wool (rubbing them on your head might also work) makes the confetti stick to the inside of the balloon; such patterns might be hard to replicate, and also easily destroyed through a small change in the volume of air in the balloon. Recent experiments have indicated that certain types of torn-up plastic wrapping material (such as from plastic bags), as well as hair, are capable of sticking to the inner balloon lining by electrostatic attraction, like the confetti of confetti balloons, for several days, in fixed unchanging positions, regardless of the gravitational pull that would otherwise move them. Figuring out which plastic materials are amenable to such electrostatic "position fixing" might not be so simple, so I am first going to experiment more with just human hair (which could be a cheap source of material... just cut your own hair?)
    Upon reflection, if applying multicoloured vaseline patterns to the inner lining of a transparent balloon, there doesn't seem to be much reason not to also fill the empty space in the balloon with other "conventional" unrepeatable-pattern material. Doing so would seem to increase the overall security deriving from unrepeatable patterns in the set-up. It might then be a good idea not to fill the balloon to the brim with such material, so that there is greater "fluidity" in the stuffing (in contrast to it being packed solid), and so greater ease of security-pattern destruction as part of the mechanism for detecting tampering. However, if too little material is put in the balloon, the patterns could be destroyed regardless of tampering, so care is needed not to add too little. Something not addressed thus far is how to get the item to be secured into the balloon. Small items, such as certain memory sticks, may fit through the blowing-up nozzle relatively easily, but larger items may not be insertable in the same way. One possible option is to slit the balloon open at its bulbous part, insert the item, and then glue the slit closed such that the balloon can still be blown up in the conventional way: a bit like performing a surgical operation on the balloon. The kind of glue used would probably be important; perhaps glue-gun glue would work for this[3]. Another thing to bear in mind is that if the balloon is blown up very close to its maximum capacity, this might provide greater tamper evidence, because it would be very sensitive to bursting (and bursting could be seen as causing 'mayhem' of sorts in terms of destroying the security patterns).
    Instead of inflated balloons, inflated bubble-gum bubbles could perhaps be used to improve security. Once such a bubble is burst, blowing up a bubble with the same patterns as the burst bubble is likely practically impossible. I am not sure how such bubble-gum bubbles would actually be realised in practice, as they would likely be quite delicate; greater thought and research are perhaps required to make such a mechanism practically possible.
    It seems that opaque balloons can sometimes be used, simply by shining a strong torch light (such as a smartphone torch) through the inflated balloon to reveal the inner-lining security patterns. The patterns must be capable of being revealed in this way, so the substance used to make them may need to be partially transparent. Also, if the balloon is not empty, security may be undermined if the torch light cannot get sufficiently through the balloon. Being able to use "ordinary" opaque balloons, which are likely more common, may be especially good for budgetary reasons. It could also be good for security, as the patterns are not normally compromised to adversaries simply passing by: the patterns are not visible to casual passers-by, and a strong torch light is required to make them visible.
  • The unrepeatable-pattern material could be formed from multicoloured iron filings in water. The transparent container would be wrapped completely with transparent magnets, so that the multicoloured iron filings would be attracted to the panels of the container in a random way. Once the filings settled into fixed positions stuck to the walls of the container, photos of the outside of the container could be taken; these would be photos of the security patterns visible from the outside. To get to the container's contents, the outside magnets would hopefully have to be moved, which would hopefully noticeably destroy the security patterns. This security method is perhaps prone to an attack where the iron filings are glued in place to repeat a particular pattern and the glue used ends up dissolving in the water, leaving no trace that glue was ever used. On another note, a transparent water-cooling PC case might be adaptable to form the basis of the security container for this method.
  • Electrostatic or magnetic attraction could perhaps be used in conjunction with some suitably fine material, such as hair in the way it is affected by electrostatic charges. But gluing the material, in such a way that the glue evaporates, might be capable of defeating such security. If the electrostatic or magnetic charges could be randomly distributed, and then somehow set up to be lost upon opening the container, then maybe that would overcome such attacks (since the specific charge distribution would perhaps then not be reusable or recreatable for any hoax). Cling film might be a material easily capable of carrying randomly-distributed electrostatic charges (the distribution may even be "naturally" created). You could, for example, instead of draping storage boxes with cloth, drape them with cling film having short cut strands of hair randomly positioned on its underside (other underside materials that might also be used are small pieces of plastic-bag material, dust, and talcum powder [perhaps multicoloured using paints or otherwise], when the material is capable of electrostatic attraction [in the case of dust and talcum powder, good security might also be possible without electrostatic attraction]). The hair would hopefully stay in position due to the electrostatic charges in the cling film, and also hopefully be easily moved by slight physical disturbances such as those perhaps made by intruders. Adversaries wouldn't be able to spray a conventional kind of glue to keep the hairs in place, because the hairs would be on the underside of the cling film (where any spray cans or similar dispensers should find it hard to reach).
The physical shape of the cling film, perhaps slightly crumpled and maybe also slightly warped, would also help make it hard to deceptively recreate the same security patterns: the cling-film shape, in addition to the hair arrangement, would also make up the security patterns. It might be that the randomly-distributed charge in the cling film is easily modified by slight physical disturbances; if so, that property would make this method even more secure.
  • How about having a memory stick wrapped in cellophane, left in a transparent container of water, where moss or algae is allowed to grow all over it? Perhaps taking the stick out would invariably and noticeably destroy the moss/algae patterns such that they could not unnoticeably be repaired. Reproducing the moss/algae patterns would perhaps be practically impossible because of the way the moss/algae grows. Perhaps the only problem would be how to freeze the moss/algae patterns so that they stopped changing. Could a microscope perhaps be used to make sure the patterns hadn't undergone tampering?
  • Perhaps the container could be some kind of irregularly-shaped electrical capacitor storing charge. Some random action could somehow disturb the charge into an unrepeatable pattern. A "photograph" of the charge could then be taken as the security photo (maybe of the EM fields?). Opening the container would invariably disrupt the charge, and consequently also the security image. The random nature of the pattern perhaps means that it would be hard to reproduce artificially. Cling film might be good for keeping a randomly-distributed electrical charge (it may even naturally acquire such a distribution), and such a distribution might also be capable of being easily lost.
  • Can cracks in transparent glass be repaired such that it is unnoticeable that the cracks were ever there? If not, then maybe glass could be used somehow. Encasing in molten glass might be capable of providing a tamper-evident solution. Instead of glass, perhaps plastic could be used, such as perhaps acrylic plastic. Transparent acrylic can be bent by using standard heating equipment meant for such bending; such heating equipment appears likely to be inexpensive as well as easy to use; according to https://plasticsheetsshop.co.uk/how-to-bend-perspex/, acrylic can be bent using a hairdryer, an adjustable paint burner or a convection oven (if you don't have the standard equipment). It may be possible to use plastics other than acrylic, that are more readily available. For example, it might be possible to recycle the transparent or semi-transparent plastic found in old CD/DVD cases (am assuming that such plastic is not classed as acrylic) for this security mechanism. The Wikihow "How to Melt Plastic" article indicates that household plastic can sometimes be melted in a kitchen oven, with a heat gun, or with household nail polish remover (acetone). Bending acrylic completely around the item to be secured, both in the latitudinal and longitudinal directions, might work. Rather than using completely transparent acrylic plastic, if “fleck glitter” acrylic plastic were instead used, it would perhaps be then capable of providing unrepeatable patterns without requiring any extra work (if it were the case that the “fleck glitter” patterns were of a random nature in the acrylic). "Fleck glitter" acrylic plastic appears to be currently relatively expensive; instead, it would seem just painting the inner sides with glitter nail-varnish polish should work; brief research seems to indicate that unbending the plastic through heat would likely melt the "nail-varnish polish" patterns, and therefore this would seem to add to the tamper-evidence mechanism. 
But then how to prevent the bent plastic from being unbent, and then bent back again, in an unnoticeable way? Perhaps painting the bent acrylic with some material that noticeably changes when heated up could overcome such an attack (glitter nail-varnish polish might work). Any heater used to unbend the material would then destroy such patterns. Perhaps simply using a file on the bent acrylic would work. The filed acrylic would hopefully have a rough texture with a pattern hard to reproduce/imitate precisely, especially in combination with the “fleck glitter” patterns. Heating the acrylic would then hopefully melt the rough texture created using the file, and so destroy the security patterns formed through the filing. Some, and perhaps all, types of thin acrylic appear likely to shatter easily (I've gathered this from the comments of others); such easy shattering could add to the tamper-evident mechanism.
    This idea works along the hypothesised principle that it is difficult to unnoticeably conceal cracks and topological cuts in transparent glass/plastic. It is considered that this might not be true (for example) with glue-gun glue: it could be possible to cut a solid piece of glue-gun glue in two, and then bond the two halves together (through slight melting of their ends) so that there is hardly any evidence that the piece was ever cut in two. This could mean that tamper-evident mechanisms relying solely on glue-gun glue might be able to be circumvented through such cutting and re-bonding. Adding a swirl of contrasting colours (perhaps using paint) to the glue-gun glue, as well as adding glitter (glue sticks are actually available that already have glitter in them), could improve things, but even with such measures, such tamper-evident mechanisms might still be able to be circumvented in the just-mentioned way. One possible way to overcome such weaknesses is to mix wax in with the glue-gun glue (maybe multicoloured wax) so that the wax regions and the glue regions in the conglomerate have boundaries escaping easy encapsulation. Such a mechanism would rely on the fact that wax has a lower melting point than glue-gun glue. Adversaries trying to do any topological cutting and then re-bonding would probably then have a rather tough time concealing such cuts, because the melting and re-bonding process would be hard to control as a consequence of the wax becoming runny in the melting of the glue.
  • Rather than a rigid container that simply has a lid, a special container that falls apart completely when it is opened perhaps would improve security, because there would then likely be greater destruction of the security patterns. Something like a house of cards where if one of the cards at the bottom is taken out, the whole structure collapses. Or perhaps like a cardboard box that easily collapses to cardboard only existing in one plane, from a 3D box to a 2D cut-out for the box. Using hinges might be useful in the construction of such containers. Following this same idea of a container falling apart, some material that when it develops a crack, cracks all over the place such that the whole structure disintegrates to some extent, could also improve security along a similar vein—a bit like an egg shell but maybe more fragile. Certain kinds of extremely fragile glass perhaps have such properties.
  • Transparent hair gel might have been suitable, but it might be possible, with very sophisticated equipment, to lift out a section of the gel, and then put it back in, without any disturbance to the security patterns occurring.
  • In respect of inducing a greater destruction of security patterns during the retrieval of secured items, perhaps having the item to be secured placed in liquid, where a buoyant layer of solid bits is trapped beneath the secured item, and a sinking layer of solid bits is trapped above the secured item, could work. The natural tendency is for the sinking layer to go to the bottom, and the other layer to the top, but for the security device the layers are in some sense in the opposite places to “where they want to be”. When secured as such, there is a natural tension between the layer of bits “wanting” to go up, and the layer of bits “wanting” to go down. Removing the secured item perhaps invariably causes such disturbance that both layers get disturbed with the layer wanting to go down, to some extent (perhaps completely) going down, and the opposite with the other layer. The security patterns then would thereby be destroyed, hopefully more than with other methods, and perhaps even in a total way. Such a set-up would perhaps solve the recent problem I’ve been experiencing concerning the security patterns not being sufficiently destroyed. The theories behind the sedimentation of sea beds, and the use of centrifuges, seem applicable here.
  • If it is possible to surround items to be secured with long-lasting bubbles that keep their positions, then that could perhaps be a very secure unrepeatable-patterns tamper-evident mechanism. Getting to the secured item would invariably destroy or at least alter the configuration of the bubbles. Bubbles with long lifetimes seem possible (see http://www.recordholders.org/en/list/soapbubbles.html). Instead of bubbles, a kind of honeycomb-like architecture formed by the soapy material of bubbles, within the cavity between a secured item and the outer casing of a transparent rigid plastic box, might also work.
  • A plastic box containing some fragrance that is lost just a few seconds after opening the box could perhaps provide security. To overcome the potential attack of an adversary deceptively putting the same fragrance back into the box, a highly distinctive fragrance could perhaps be used. Such a mechanism would probably thwart the attack where a different "air" composition is induced in the opening of the box through the use of laboratory conditions, contrived so as to circumvent the security mechanism. For example, such attacks perhaps would work where colour-changing chemicals form part of the security mechanism, and where colour changing doesn't occur when the box contents are exposed to pure nitrogen (instead of air). The problem with using a distinctive fragrance is how to make it such that an adversary is unable to duplicate it. Anyway, this idea exploits the nature of gases: once a container of gas is opened, even for a very short amount of time, it's likely very difficult to prevent at least a bit of it escaping. This general security idea of using gases in such ways appears to be important, and perhaps can be adapted for a variety of security mechanisms.
  • Another potential way to use balloon rubber or similar stretchable material is to stretch it under tension completely around an item to be secured, with short cut strands of human hair randomly placed underneath the material so as to cover the entire surface of the item, with the hairs visible on the other side of the stretched material, and with the material closed (perhaps tied) so that it persists in a stretched and unchanging state around the item. It's a bit like putting the item to be secured in stretched tights (perhaps). The hair placements, visible even though underneath the stretched material, would constitute the security patterns. Short strands of hair perhaps constitute good unrepeatable-pattern material because they are easily disturbed by slight things whilst also having some persistence in respect of their positioning. Because the stretchable material is stretched around the item to be secured, trying to get to the secured item would likely inevitably make part of the material un-stretch, and such un-stretching would likely disturb the security patterns. Trying to recreate the same patterns would likely be extremely difficult because of the conflicting forces between the unstretchable yet flexible hair and the stretchable material (such conflicts existing particularly whilst the stretchable material is being stretched to completely go around the item to be secured). Also, repairing cuts and tears in the stretched material, made by adversaries in order to get to the secured item, would likely be hard to conceal, and might also, of itself, cause significant security-pattern disturbance (and so increase the level of tamper evidence).

Misc. notes

[edit source]

     MarkJFernandes (discusscontribs) 14:40, 25 January 2021 (UTC)

«Mind-reading attacks» chapter   (chapter 6)

[edit | edit source]

«Mind-reading attacks» chapter   (chapter 6)

Broaden chapter so as to include mind-reading performed using psychological techniques (such as reading behavioural cues)?

[edit source]

The legacy of focusing on psychic powers has meant that when the chapter name was broadened to "Mind-reading attacks", a vacuum came to exist in respect of other forms of mind-reading; in particular, psychological mind reading is not mentioned at all. Therefore, it might be a good idea to add some information on psychological mind reading.

MarkJFernandes (discusscontribs) 17:11, 8 May 2020 (UTC)


The distinction of 'psychic' within the broader topic of 'thought reading and control' is perhaps pointless?

[edit source]

All psychic phenomena (defined as mind-to-mind activities mostly without the use of technology) can be mimicked using technology. Therefore, referring specifically to psychic things perhaps narrows things unnecessarily, and degrades the material. Also, referring to 'thought reading and control', which primarily involves the use of technology, will perhaps have more credibility, as it is disputed whether psychic phenomena really do exist.

Same also applies to "Passwords and digital keys" chapter.

     MarkJFernandes (discusscontribs) 08:23, 2 June 2020 (UTC)



"Also, the use of security tokens (such as USB security tokens) ... can overcome such psychic attacks." → "Also, the use of MFA ... can overcome such psychic attacks."?

[edit source]

More general, and potentially more useful.

     MarkJFernandes (discusscontribs) 09:22, 2 June 2020 (UTC)


«Simple security measures» chapter   (chapter 7)

[edit | edit source]

«Simple security measures» chapter   (chapter 7)

Is sleep mode more secure than shutdown or hibernate mode?

[edit source]

Trammell Hudson briefly deals with whether computers should be completely shut down, or suspended, in relation to security (see https://trmm.net/Heads_FAQ#suspend_vs_shutdown.3F). This comparison can be extended to whether computers should be powered on, or powered off, in relation to security. It may well be better for a computer to be powered on, as in such a state it can be more difficult to carry out certain classes of attack. In conjunction with a computer being powered on, computer-driven event logging can be activated, to provide even more security.

A 'sleeping' computer can perhaps be made more secure if the computer is designed to 'incinerate' security keys (cf. §"Destroy_key_when_attacked") in the event of a hard shutdown (opposite of graceful shutdown, such shutdowns possibly being instigated by intruders wanting to perform tampering on the computer whilst it's in a powered-off state) as well as in the event the computer detects system tampering (such detection being possible whilst the computer is turned-on, at least to a certain extent). This can possibly be implemented by moving (and not simply copying) the keys from non-volatile memory (such as a TPM, BIOS/UEFI firmware, or system disk), to volatile memory, whilst the computer is in operation. Upon a graceful shutdown, the keys would then be moved back to non-volatile storage. With such a set-up, a hard shutdown would result in the loss of the keys. It should be noted that data can sometimes be recovered from powered-off volatile memory (see here for more about this); in light of such, perhaps certain kinds of volatile memory that properly get wiped when losing power, ought to be chosen.
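The move-rather-than-copy key lifecycle described above can be sketched in heavily simplified Python. This is a toy simulation of the scheme only: the class, its method names, and the dictionary standing in for non-volatile storage are all illustrative inventions, not a real TPM or firmware API.

```python
import os

class KeyIncinerationDevice:
    """Toy simulation of the 'incinerate keys on hard shutdown' idea:
    while powered on, the key lives ONLY in (simulated) volatile memory;
    it is written back to non-volatile storage only on a graceful
    shutdown, so a hard power loss destroys it."""

    def __init__(self, non_volatile: dict):
        self.non_volatile = non_volatile   # stands in for TPM/firmware/disk
        self.volatile_key = None           # stands in for system RAM

    def boot(self) -> None:
        # MOVE (not copy) the key into volatile memory.
        self.volatile_key = self.non_volatile.pop("secret_key")

    def graceful_shutdown(self) -> None:
        # Only a graceful shutdown writes the key back.
        self.non_volatile["secret_key"] = self.volatile_key
        self.volatile_key = None

    def hard_shutdown(self) -> None:
        # Power yanked: volatile memory is lost; nothing is written back.
        self.volatile_key = None

storage = {"secret_key": os.urandom(32)}
device = KeyIncinerationDevice(storage)
device.boot()
device.hard_shutdown()
print("secret_key" in storage)  # False: the key has been 'incinerated'
```

A real implementation would of course depend on the chosen volatile memory genuinely losing its contents on power loss, which is exactly the caveat raised above about data recovery from powered-off RAM.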

Sleep-mode on computers can potentially be leveraged for higher security, when running an OS only from volatile system RAM[4]. A computer need not make use of system disks, or live CD/DVDs, for weeks and maybe even months at a time, by simply running the OS straight from volatile system RAM, and putting the computer to sleep during periods when it is not needed. This would likely improve security so long as the OS were properly locked (whether by use of passwords or otherwise) to prevent illegitimate users from doing normal user actions, because it is probably more difficult to tamper with volatile-system-RAM data in a constantly-on OS-locked computer than with data on non-volatile data mediums (such as system disks and live DVDs). The security would also be higher, because such a method of computing would also provide extra tamper detection and tamper evidence. It would likely be difficult to fiddle with the OS as loaded into volatile system RAM on such a computer, whilst the computer is on, as such fiddling would probably result in a corruption of the computer's state, and would then provide some tamper evidence and tamper detection; the computer perhaps would be 'frazzled' such that it required a reload into the volatile system RAM of the whole OS. To ensure better that such 'frazzling' takes place as a form of tamper evidence and tamper detection, the OS could be stored in volatile system RAM only as encrypted data. To be even more secure with respect to preventing the stealing of user data, unused portions of the volatile system RAM could be zeroed (securely wiped) before the computer is put to sleep, to prevent 'forensic' methods from recovering data deleted only using shallow-depth deletion methods (see here for info about data being able to be recovered from powered-off volatile RAM.) 
Such a system could perhaps be used in conjunction with a Raspberry Pi set-up, where in the eventuality that tampering were detected, a brand-new Pi set-up could be purchased (at a low price, because Pi devices are cheap), and the old set-up sold on as either spare parts or as components potentially not secure. A Raspberry Pi set-up would also be good because placing the system into "sleep" mode would likely not require much power, and because batteries, as opposed to a mains supply, could supply such power during the "sleep" mode, thereby overcoming attacks focused on disrupting the mains supply of electricity. This kind of set-up was proposed in the forum topic entitled "Secure computing using Raspberry Pi for business purposes" on the Raspberry Pi forums. Such a set-up could also potentially be used for storing cryptography keys, and certain files especially needing not to be corrupted, in such ways that they are less prone to being maliciously corrupted than if they were stored in non-volatile mediums. These principles may be in effect with certain constantly-on servers, and it could be useful to look at the security principles in play for "permanently on" servers, to get further guidance regarding these things.

     MarkJFernandes (discusscontribs) 16:27, 3 November 2020 (UTC)


«Broad security principles» chapter   (chapter 8)

[edit | edit source]

«Broad security principles» chapter   (chapter 8)

Adapt business/work model to lessen impact of threats in threat model

[edit source]

A new broad principle that can be added to this section, might be the adapting of an entity's business/work model to lessen the impact of threats in the entity's threat model. For example, in circumstances where intellectual property theft is rife, you could change your business model so that you are not so dependent on intellectual property protection. This happens in the open-source community, where the business model is not so reliant on protecting intellectual property; instead, revenues are generated in ways probably mostly immune to attacks based on stealing intellectual property.

--MarkJFernandes (discusscontribs) 13:56, 17 April 2020 (UTC)


Mistakes in "Security measure of taking key out of self-locking padlock" photos

[edit source]

Mistakes were made in the taking of these photographs. The padlock should appear to be unlocked, and also ideally, probably the box should be opened (rather than closed).

MarkJFernandes (discusscontribs) 14:11, 21 May 2020 (UTC)


Relationship between “Destroy key when attacked” principle and military strategies

[edit source]

I've read through the list of military strategies and concepts at https://en.wikipedia.org/wiki/List_of_military_strategies_and_concepts, and also done some brief internet research, but can't find this principle distinguished anywhere. It is similar to 'scorched earth' policies, but not quite the same, because there's no retreating or advancing.

MarkJFernandes (discusscontribs) 14:13, 21 May 2020 (UTC)


Add "Ward off criminals by being public about your security" as a broad security principle?

[edit source]

Criminals can be warded off when they believe you have good security in place, especially when they believe they might be caught by the police because of your security. Should this be added to this chapter as a broad security principle? Maybe it's not broad enough and should be instead put in the "Miscellaneous notes" chapter. Or otherwise maybe it just isn't significant enough to be in the book at all.

It is quite related to the "Publishing security methods" broad principle, and perhaps should be mentioned in the documentation of that principle.


     MarkJFernandes (discusscontribs) 11:38, 6 June 2020 (UTC)


"Security by screaming" as broad security principle?

[edit source]

Perhaps there is a broad security principle that can be labelled as "security by screaming". Essentially, more security is attained by proclaiming to the world, almost in a screaming-like way, about the awfulness of your security compromises. In such fashion, attackers may be warded off from fear of being found out, possibly because of increased attention paid in their direction.

     MarkJFernandes (discusscontribs) 11:35, 6 June 2020 (UTC)


"Security through obscurity" contrasts the "Publishing security methods" broad security principle. The Wikipedia page for "Security through obscurity" gives justification for why publishing security methods is likely better.

     MarkJFernandes (discusscontribs) 11:31, 9 June 2020 (UTC)


Add "Security layers with differing credentials, for improved security of more valued assets" as broad security principle?

[edit source]

Adding this was an idea borne out of initial discussion of this book with the Qubes user "Catacombs".

Their idea was possibly to include information on "..nested encrypted folders...". They said the idea was something like this: after the hard drive's full-disk encryption, there would be a second layer of encryption for all of their highly-private letters, by using an encrypted-folder mechanism (distinct from the full-disk encryption). A user would enter their credentials initially to get access to the operating system, which would decrypt the full-disk encryption but would not decrypt the encrypted folder. In order to get access to the encrypted folder, the user would have to enter a second set of credentials (perhaps a second password). In such ways, there would be something like access control to a building, and then greater access control to a highly confidential room in that building. It was put forward by Catacombs that leaving "...information openly in the file structure..." was not safe. They further implied that such a security mechanism overcame the security weakness of not having strong security for information more sensitive than the average information on your system.

Catacombs said that in 2009, they had a MacBook Pro which had the ability to create a software-driven encrypted partition inside of the main file structure. They further added that reviewers felt the encryption used back then was quite good.

I suggested that a more general concept of "encrypted container within encrypted container" might be appropriately added to the book. I went further and said that perhaps an even more general concept of "encryption within encryption" should instead be added. Upon reflection of this latter general concept of "encryption within encryption", I realised that an even more general concept existed that included the physical security mentioned by way of analogy in the building-and-room analogy above. I've labelled this concept as "security layers with differing credentials, for improved security of more valued assets" because I could not find it singly labelled elsewhere. The concept is perhaps related to User Account Control (UAC) which is actually touched upon earlier in this book, in the "Regarding operating system" section, in the following excerpt:

"Some general security advice in relation to using an operating system, is for users to have an administrator account that is different to the standard account users use for everyday computing. The standard account has fewer privileges. The more powerful administrator account, that also has higher associated security risks, should also have a “minimised” exposure to risk due to it being used only when needed—when not needed, the less risky standard account is instead used."

It is also related to the concepts of security clearance and multi-level security but is not the same as either of these concepts.
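The "encryption within encryption" idea can be illustrated with a short Python sketch. The XOR "cipher" below is a deliberately toy construction (NOT secure, and not what any real full-disk or folder encryption uses); it exists only to show how two independent credentials create two layers, so that peeling off the outer (disk) layer still leaves the inner (folder) layer intact.

```python
import hashlib

def keystream_xor(data: bytes, passphrase: str) -> bytes:
    """Toy XOR keystream 'cipher' (illustrative only, NOT secure).
    XOR is symmetric, so the same call both encrypts and decrypts."""
    out = bytearray()
    counter = 0
    key = passphrase.encode()
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

letters = b"highly-private letters"
inner = keystream_xor(letters, "folder passphrase")    # encrypted folder
outer = keystream_xor(inner, "full-disk passphrase")   # full-disk layer

# Entering only the first credential peels off the disk layer...
after_disk = keystream_xor(outer, "full-disk passphrase")
print(after_disk == letters)  # False: the folder is still encrypted
# ...the second credential is needed for the confidential 'room'.
print(keystream_xor(after_disk, "folder passphrase") == letters)  # True
```

In a real set-up, the two layers would be something like LUKS or FileVault full-disk encryption plus a separately keyed encrypted container, exactly as in the building-and-room analogy.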

     MarkJFernandes (discusscontribs) 15:15, 9 June 2020 (UTC)


Considered whether high-latency email should be mentioned in the "Time based" broad-security-principles section, or elsewhere in book...

[edit source]

Decided such a concept likely should not be mentioned in book. The concept appears to be more about having anonymity, and the book doesn't deal so much with establishing anonymity. Knowing how to be anonymous in computing doesn't appear to be that useful to everyday computing conducted by most users. It is perhaps more useful to certain fringe activities, such as the reporting of human rights abuses. Also, such things are very likely well documented elsewhere on the net, probably even being available as free resources.

     MarkJFernandes (discusscontribs) 09:21, 10 June 2020 (UTC)


Add information under "Geospatial" broad security principles, concerning potential security advantages gained by moving around?

[edit source]

Security advantages may be gained by moving around, and computing from different geospatial locations, especially if an adversary is focusing their attacks on a specific geospatial location. Such a strategy is probably documented as some kind of military strategy. The Qubes user "Catacombs" has said that such an approach might be useful for a country like China, presumably because of the totalitarian government there.

     MarkJFernandes (discusscontribs) 10:13, 10 June 2020 (UTC)


Improvements to §"Time based"

[edit source]

The subsection "Based on time taken to forge" should probably be placed under the section "Based on time passed" since it too is based on time passed: security is attained based on how much time has passed, on how much time has not passed perhaps. In a related way, the current content under "Based on time passed", might be best placed under a subsection of that subsection, called something like "Security derived from age". Another subsection could be created under the "Based on time passed" subsection called something like 'based on security credential expiry date'. For example, you may wish to use a private key to create new mobile phone lock passwords that expire at the end of each day (perhaps simply by PGP signing the day's date). If an adversary were to capture the password, perhaps due to your unlocking your phone in a public shopping centre, then because the password would expire at the end of the day, it might mean you still maintain a good level of security.


     MarkJFernandes (discusscontribs) 09:21, 2 July 2020 (UTC)


X-ray and T-ray probably should always have the initial letter capitalised....

[edit source]

X-ray and T-ray probably should always have the initial letter capitalised. If this is the case, correct the mistakes where this has not happened not only in this chapter, but in any other places in the book.

     MarkJFernandes (discusscontribs) 08:06, 16 July 2020 (UTC)


Rename §⟪Relying on high production cost of certain security tokens⟫ → ⟪Using high-cost-to-forge barriers for greater security⟫?

[edit source]

Such generalisation in this "Broad security principles" chapter is generally desirable because the chapter is focused on broad/general principles. It does appear likely that the proposed new name constitutes a distinct and genuine broad security principle.

If such renaming took place, the previous body would perhaps then be again placed under the ⟪Relying on high production cost of certain security tokens⟫ heading but the heading would instead be a sub-heading under the suggested and more general heading of ⟪Using high-cost-to-forge barriers for greater security⟫. Also with such renaming, the "Cryptocurrency-like mining to increase trust" inventions could then be appropriately linked-to as being categorised under the new heading.

Cheap SD cards are a security risk partly because of how cheap they are. An adversary perhaps can replace 1000 cheap SD cards with deceptive espionage-tech-laden fakes without too much difficulty, because of their low cost. The same could also be true of BIOS EEPROM chips. However, replacing 1000 expensive SD cards, where the greater expense can be verified using rigorous checks on the higher capacity and/or speed of the SD cards, is probably much more difficult. For both SD cards and EEPROM chips, such greater expense perhaps could also be established by filling each one with a blockchain signing the chip's serial number or some other suitable identifier. The ideas of this paragraph could perhaps be placed under another sub-heading of ⟪Security by costly verifiable features in device⟫. Interestingly, gold-plating EEPROM chips and the like could perhaps provide such greater security. There would have to be some way for users to authenticate that the gold were genuine, and there appears to be much information on the internet regarding testing the authenticity of gold—see https://www.wikihow.com/Tell-if-Gold-Is-Real.

This idea of there sometimes being more security due to the higher costs associated with forging, might lead one to believe that the CPU in a computer system is less of a point of attack than the embedded controller in the same system: it's probably generally cheaper to create a fake EC processor than to create a fake CPU. Following a similar line of thought, system-on-a-chip systems (SoCs) may provide a security advantage over other systems having greater numbers of individual components able to be, after manufacture, physically separated and replaced: if you want to put a backdoor in the CPU (for example), you still have to go to the expense of replicating the rest of the SoC's functionality for SoC-based systems; on the other hand, if you are not targeting an SoC-based system, you can just create a fake CPU which would likely be cheaper than making a whole fake SoC.

     MarkJFernandes (discusscontribs) 16:58, 15 December 2020 (UTC)


Perhaps mention 3D printers, and FPGAs programmed as CPUs, in §⟪DIY security principle⟫?

[edit source]

3D printing would seem at times to be an application of the DIY security principle. By 3D printing hardware and other physical objects, you can probably be more confident regarding the integrity of the printed items (especially in respect of there being no hidden espionage tech or other hidden "maltech").

On another note, more related to microchips, FPGAs can be programmed to function as CPUs, in a DIY way, such that certain CPU attacks (such as via hardware backdoors) can be thwarted (see "Verifiable CPU" section at https://www.bunniestudios.com/blog/?p=5706).

These thoughts can perhaps be mentioned in the §⟪DIY security principle⟫. However, because the ideas are quite concrete, perhaps they should be placed elsewhere in the book, either in addition or instead.

     MarkJFernandes (discusscontribs) 09:45, 21 October 2020 (UTC)


How to compare live OS discs obtained using multiple channels, when you have no trusted OS....

[edit source]

In respect of §⟪Using multiple channels to obtain product⟫, a scenario may arise where you have what should be multiple copies of a live OS disc, obtained using multiple channels, but no trusted OS that can be run to do byte-for-byte comparisons of the discs to make sure they are all the same. In such a situation, you can do some form of checking by loading each disc in turn, and then, within the OS session loaded from each disc, byte-for-byte comparing all the other discs to the particular disc loaded. In such a scenario, none of the OS discs are trusted, but the chances that all of the discs have been compromised are quite low. You can leverage such probability to reach some level of confidence that none of the discs have been compromised, whenever all the just-mentioned byte-for-byte comparisons throw up no differences (pass successfully).
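The comparison step run inside each untrusted session could look something like the following sketch. The channel names and the short byte strings standing in for real disc images are invented for illustration; a real check would hash (or directly compare) the raw device contents.

```python
import hashlib

def disc_fingerprint(image: bytes) -> str:
    """SHA-256 over the raw disc image, as a stand-in for a full
    byte-for-byte comparison of the media."""
    return hashlib.sha256(image).hexdigest()

# Images of the same live OS obtained via three independent channels
# (toy byte strings standing in for real disc images).
discs = {
    "vendor_download": b"LIVE-OS-IMAGE-v1.0",
    "mirror_site":     b"LIVE-OS-IMAGE-v1.0",
    "postal_dvd":      b"LIVE-OS-IMAGE-v1.0",
}

fingerprints = {name: disc_fingerprint(img) for name, img in discs.items()}
all_match = len(set(fingerprints.values())) == 1
print(all_match)  # True only if every channel delivered identical bytes
```

The security argument is probabilistic, as described above: any single disc (and hence the session hashing the others) may be compromised, but a clean cross-comparison from every disc in turn makes it unlikely that all were.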

This principle was developed for a Raspberry Pi project attempting to establish a secure computing environment for business purposes—see here for more about the project.

     MarkJFernandes (discusscontribs) 15:15, 29 October 2020 (UTC)


Is there a broad security principle based on having a cheap set-up?

[edit source]

There may be a broad security principle based on having a cheap set-up. Such extra security was touched upon in a Raspberry Pi project attempting to establish a secure computing environment for business purposes—see here for more about the project. Essentially, the security advantage I am discerning (that I think probably constitutes a broad security principle), is that if there is ever sufficient reason to believe that such a cheap set-up has become compromised, the user is then able to purchase a brand new non-compromised set-up at a low cost, with the possibility of selling on the old set-up as either spare parts, or advertised as a potentially-compromised system. You could perhaps do the same with an expensive set-up, but the risk of not being able to find buyers for the old system, together with the greater absolute loss incurred when more expensive goods become second-hand, could mean that the financial risk of catering for such contingency is simply too much to bear.

     MarkJFernandes (discusscontribs) 16:20, 29 October 2020 (UTC)


Do avoiding "bells and whistles", trying to be "barebones", and reducing power & capability, constitute a broad security principle?

[edit source]

Such is touched upon in a Raspberry Pi project attempting to establish a secure computing environment for business purposes—see here for more about the project. It is also touched upon in the geospatial broad security principle, when it is mentioned that a user may want to reduce their power and capability by not unlocking their phone in public places. It is also touched upon in other areas of the book (such as in the "Software based" chapter in the consideration of whether the Raspberry Pi Zero device could be used as a secure downloader).

Having "bells and whistles" simply increases security concerns, and when having high security is important, doing away with them when possible is likely a good idea. By moving in this direction, you may end-up with a system that is fairly bare-bones like perhaps some of the Raspberry Pi products, some of the products conforming to the 96Boards specifications, and some very basic non-smart mobile phones (that perhaps are better to use for secure downloading).

Reducing power and capability seems to be something of a parallel concept to trying to be "barebones". Essentially, security is leveraged at the cost of reducing power and capability. Why leave certain computer ports exposed when you don't really need them? Perhaps disable them for increased security, at the expense of reducing your power and capability.

     MarkJFernandes (discusscontribs) 16:49, 29 October 2020 (UTC)


Add new broad security principle of "Using an intrusion-detection-and-recovery-from-intrusion approach instead of just a tamper-prevention approach"?

[edit source]

Whilst in an ideal world preventing tampering absolutely might be desirable, realistically, a security approach of intrusion detection coupled with recovery after such detection might be better. Preventing absolutely all forms of tampering might simply be too costly, and might not have much impact when the probability of tampering is very low. In this regard, it might be easier, and more beneficial, simply to detect intrusion, and then upon detection, to re-establish your system(s) so as to "eject" any possible tampering from them.

When using an intrusion-detection-and-recovery-from-intrusion approach, you may want to use cheap components, so that if intrusion is detected, it is not too costly to replace the components with new components that you know have not been compromised (see the "Is there a broad security principle based on having a cheap set-up?" note for more about this).

In respect of trying to lock down the code and data associated with OS installations, bootloaders, BIOSes/UEFIs, and data files, it may be much easier simply to detect intrusion where tampering may possibly have occurred, and then just reinstall all the data and code from secure backups after such an event. This approach is perhaps similar to reinstalling the whole of an OEM setup so as to have certainty over the security of the computer system, instead of trying to establish that the OEM installation has no malware in it. The "Digital-storage security through multiple copies of data" note is relevant to such an approach.
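
The detect-then-reinstall idea above can be sketched as a minimal integrity check: record a manifest of file digests while the system is trusted, store it on secure backup media, and later compare. The function names and manifest format here are my own illustration, not something from the book:

```python
import hashlib
from pathlib import Path

def build_manifest(root):
    """Record the SHA-256 digest of every file under `root` (the known-good baseline)."""
    root = Path(root)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def detect_intrusion(root, baseline):
    """Return the relative paths that changed, appeared, or vanished since the baseline."""
    current = build_manifest(root)
    changed = {p for p in current if baseline.get(p) != current[p]}
    missing = set(baseline) - set(current)
    return sorted(changed | missing)
```

If `detect_intrusion` reports anything, the recovery step would be to reinstall the affected code and data wholesale from the secure backups, rather than attempting to "clean" in place.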

     MarkJFernandes (discusscontribs) 14:58, 3 November 2020 (UTC)


Add §⟪Size based⟫?

[edit source]

There appear to be broad security principles around the subject of size. In such a new section, there could be a subsection called something like "bigger things are harder to steal", and another called something like "smaller things are easier to hide". These ideas appear to constitute broad security principles.

By following the principle concerning bigger things, you may choose (for example) to use a big tower desktop computer instead of a small laptop/netbook, because it is easier to spot someone stealing such a big computer than it is for a small laptop/netbook (can't put it "under your jumper" and walk out). Such a big computer may also be cheaper and easier to use when in just one location, which could constitute other reasons to go for such a computer.

By following the principle concerning smaller things, you may choose to store a large amount of data on an SD card that you hide in the lining in your jacket when you are travelling, rather than on a big external HDD/SSD drive. In such instances, greater security might be attained by using a smaller storage medium rather than a bigger storage medium (in contrast to the other size-based principle just mentioned).

These principles are briefly touched upon in §⟪Physically removing storage component(s) from the rest of the computer system, and then securely storing those components separately⟫. The principles do not necessarily apply only to physical size. They could, for example, apply to disk-space size: a key file may be easier to hide in email attachments using steganography if it is quite small; in contrast, it may be harder for an adversary to steal a key file through data-transfer methods if the file is extremely large.

Bigger things can be more costly to maintain, and an easier means by which adversaries can launch "trojan horse" attacks. In respect of disk-space utilisation, one thing perhaps to consider is the malware risk for software taking up large amounts of disk space: malware checking takes longer, and there is a greater risk of failing to spot malware because of the greater complexity associated with larger space utilisation. Such size limitation as a security principle is touched upon in the "Dealing with the situation where you want to work with potentially security-compromised equipment/software" note, as well as in the security invention mentioned in the "Design feature for enabling the detection of malware in BIOS firmware" note on the talk page of the "New security inventions requiring a non-trivial investment in new technology" chapter.

     MarkJFernandes (discusscontribs) 10:58, 10 November 2020 (UTC)


Add §⟪Having thorough, great, and easy customisation in the building and maintenance of systems⟫?

[edit source]

Custom-building a PC/system such that, during its building as well as afterwards, great and difficult-to-predict customisation is possible, and components can easily be replaced using commonly-available parts, appears to be a good idea.

Great customisation can mean that extra security mechanisms can more easily be implemented, such as replacing an opaque computer case with transparent materials for easier visual-inspection security authentications.

Easy customisation in the maintenance of systems can mean that if it is suspected that a particular part may have been compromised, it alone can be easily, faithfully, and cheaply replaced—the whole system need not be "trashed", only the part being replaced.

Thorough and great customisation can mean that adversaries cannot much predict beforehand what system the user will have. With prediction, the attacks of adversaries can be more focused, and can exploit the re-usability of attacks formulated previously; without it, adversaries may be at a loss as to what attacks will work even after finding out the system configuration, due to any pre-formulated "canned" attacks failing as a result of the system having been highly customised away from being vulnerable to such "canned" attacks.

Not 100% sure these ideas constitute a broad security principle.

     MarkJFernandes (discusscontribs) 13:47, 10 November 2020 (UTC)


Concerning §⟪User randomly selecting unit from off physical shelves⟫, and add §⟪Anonymity based⟫?

[edit source]

After trying to put into practice the "User randomly selecting unit from off physical shelves" broad security principle in respect of securely acquiring a smartphone (as presently advocated in the advice given under §⟪Getting an uncompromised smartphone and obtaining software with it⟫ of the "Software based" chapter), I have run into a few snags. Unfortunately, it appears most physical shops in the south east of England (UK) do not have smartphones actually on shelves where users can personally pick units themselves with their own hands. Some stores (including the Carphone Warehouse, as now merged with PC World) will have staff go and fetch the unit for the model that you pick out in the front of the store. Unfortunately, this is open to attack by store staff, and completely undermines the security advantage highlighted in the principle.

The inability to buy mobile devices by making use of this principle, coupled with the research discovered in the writing of this book, makes me strongly suspicious that it is a way to leave open the possibility of hacking phones targeted at certain individuals and groups. I was hoping PC World, being a big physical store, would lend itself well to this security principle, but alas this does not seem to be the case. This notwithstanding, the broad security principle might still be usable at certain wholesaler warehouse-type stores such as Costco; however, from photos of the inside of Costco stores, it appears that they too probably do not keep actual phone units in the main customer area. Being in the midst of this COVID-19 crisis, and with the renewed and even frantic push to switch to online retail, it might be that this broad security principle will not be so good for securely acquiring phones from this time onwards, at least in the south east of England.

Not all hope is lost, though, in regard to real physical in-person shopping. Amazon is innovating a new kind of store known as Amazon GO, which is advertised as being a cashier-less kind of store. It is scheduled soon to arrive in the UK, and its aspect of not having cashiers may mean this broad security principle of random selection might become effective. The hurdles involved in hacking the technology will likely make targeting individuals with "dodgy" phones through it impractical.

There is probably a broad security principle that lies in being anonymous. When a person does things with anonymity, it can be harder, at times even impossible, to individually target the person, and this can result in greater security. In any case, somehow the following ideas should be added to the book, whether in a new section for such a broad security principle, or otherwise. The ideas have a strong effect on the advice given in the §⟪Getting an uncompromised smartphone and obtaining software with it⟫ of the "Software based" chapter.

Using the Amazon Hub Locker service in conjunction with Amazon-fulfilled orders is probably secure if you do not include any identifying details (such as your name) in the delivery address. At the fulfilment centre, the Amazon processes are likely secure enough that staff have no awareness of which order is going to which customer. If delivering to an Amazon Hub locker in the manner just mentioned, the delivery staff/driver will quite likely not know for whom the parcel is. After delivery, the security at the Amazon Hub locker will likely be enough to prevent people from getting to your locked item: it uses a digital unlocking pass code sent to the buyer, and there are several lockers at each site into which the delivery might have been placed (making it harder for individuals to figure out which locker needs to be broken into for the purpose of targeting you).

When you receive the email saying that your delivery has arrived, you should not look at the email until you have arrived at the locker. When you arrive, you then look at the email and get the unlock code and locker number (if, on the other hand, you look at the email quite a while before arriving, someone may be able to intercept the code and locker number, whether by means of clandestine photography or psychic interception). The email delivery should be secure enough if you use a mail server that insists on encrypting the transit of emails whenever the mail server on the other end supports such capability (Gmail servers are examples of such servers), and if the Amazon mail server behaves the same way (I would be very surprised if the Amazon mail servers didn't automatically use standard mail-server-communication encryption for emails sent to mail servers able to communicate using such encryption technology).

It is likely important that a big business such as Amazon is used, partly because small businesses don't necessarily have such well-developed security practices and measures. For example, if you shop on the website of a small business, they might spy on your IP address and, in some cases, use that to target you. Such is probably unlikely with a big business like Amazon, because of the likely many technological and organisational barriers to such activity. You could possibly overcome IP-address-based targeting by using anonymity-oriented practices, such as using a short-lived dynamic IP address (for some set-ups, if you just restart your broadband router, you'll get a new IP address) or a VPN.

It is advised on the internet that greater anonymity can be attained by paying for Amazon purchases with Amazon gift cards rather than with a bank card registered to your address and person. Not sure whether such is necessary, but it could help—using gift cards does seem like a useful idea for generally remaining anonymous across all types of shopping (not just Amazon shopping).

Interestingly, I looked into whether buying from a third-party seller on Amazon might be secure enough for my purposes, simply because I wanted to save money, and it turns out that buying from a third-party seller might be even more secure—it is certainly a potential way to save money when trying to acquire goods securely through Amazon. Presently, you still need to make sure that the order is Amazon-fulfilled, as otherwise you are not able to use the Amazon Hub Locker service; such use is required for attaining the needed anonymity. Amazon customer support, on 23rd November 2020, said that only the name and delivery address would be passed on to a third-party seller when buying from such a seller through Amazon, and that in particular the email address, phone number(s), bank card details, and billing address would not be passed on to the seller.

When using the Amazon Hub Locker system, you should (as articulated above) make sure the delivery address doesn't include your name. In addition, when buying from a third-party seller, you should also make sure the name data (outside your delivery-address data) gives no indication of your true identity, since the name data might be passed on to the third-party seller, and that seller might not be trustworthy (they will quite likely not be as trustworthy as Amazon). Brief internet research, as well as my prior experiences, seems to indicate that it is likely legal in the UK to use a pseudonymous alias in purchases. To effect such use, you will need to change the name data, both in the delivery address and in your name fields, to some alias that doesn't much identify you and that also doesn't arouse any suspicions—using a name that might be commonly found in the society in which you live, but that isn't too obvious (perhaps avoid names like Joe Bloggs or John Doe?), might be a good idea.

Fortunately, from my analysis of the "Amazon Conditions of Use and Sale" terms dated 29.1.2020, that version of their contract allows for the use of pseudonymous aliases. I can imagine that many Amazon customers want to buy anonymously, and that Amazon has baked this facility into their shopping experience. It's easy to change your name data by simply going into the Amazon account settings for your account and making the appropriate changes. By following the measures outlined here, the third-party seller should hopefully be oblivious as to who the purchaser is behind each such order, and so will hopefully not be able to target you on an individual basis (nor pass your details to others for any such targeting). You should then have enough security to buy digital electronics goods, such as computers and smartphones, securely.

It should be noted that the Amazon Hub Locker service appears to have lockers all over the place where I am in Essex, England, UK. Such widespread proliferation can increase security. You can, for example, keep changing the destination locker for each new order you make. You can potentially also choose a locker quite distant from you if you suspect you are being targeted based on local geography.

UPDATE. After applying the principles outlined here regarding making anonymous purchases using the Amazon Hub Locker service to set up a trusted, low-cost, secure, basic, barebones "Raspberry Pi Zero"-powered system (over the 2020–21 winter), I have reached the conclusion that the security of the Amazon Hub Locker service was likely somehow compromised in my purchases. In particular, keyboard remote control seems to have somehow been achieved. Not sure which component was compromised to achieve such remote control, but if one component was compromised, then any of the other components could also have been compromised in the same or similar ways. Because I was very careful to physically secure the system components when they were in my possession (especially at my premises), I am led to believe that the Hub Locker service was compromised somehow. Keyboard remote control seems to be a particular kind of attack that, at least in my experience, occurs often across various computing devices. No idea what the right next step to take now is, as to some extent I appear to have exhausted all avenues. Fortunately, because I baked into the 'protocol' the keeping of financial costs as low as possible in terms of capital expenditure, I have not lost much in terms of money spent—the Pi Zero device is about as cheap a general-purpose brand-new computing device as you can get.

     MarkJFernandes (discusscontribs) 12:30PM GMT, 17th February 2021


[edit source]

See https://www.raspberrypi.org/forums/viewtopic.php?f=41&t=286049&p=1731799#p1736148

     MarkJFernandes (discusscontribs) 17:24, 30 November 2020 (UTC)


Add information to this chapter on connections between component popularity and "trustability", and design-modification difficulty and "trustability"?

[edit source]

"...Yes, but then the more generic and popular a component is, the greater the review by users is perhaps? Hardware doesn't change often, and many ppl use certain popular CPUs, perhaps leading to some level of trust ("if you haven't heard of any problems with it yet, it's probably okay")? ..." - https://www.raspberrypi.org/forums/viewtopic.php?f=41&t=286049&start=50#p1737381

     MarkJFernandes (discusscontribs) 17:44, 30 November 2020 (UTC)


Substantiation for "Minimally-above-average security" broad security principle

[edit source]

https://security.stackexchange.com/a/2956/247109

     MarkJFernandes (discusscontribs) 14:52, 4 February 2021 (UTC)

«What to do when you discover your computer has been hacked» chapter   (chapter 9)

[edit | edit source]


When this book is finally published, the link set for the heading "When to change digital passwords and keys?" should be converted into an icon-based absolute-address link (or perhaps a link that uses the "Wikibooks:" prefix [using such a prefix still results, in some respects, in an absolute address]). Unfortunately, relative links for images (such as icons) do not seem possible, which would have been ideal.

The current method of linking is something of a temporary workaround. It has the disadvantage of somewhat ruining the visual aesthetics on the page.

When this book is finally published in its permanent address location, the absolute link set in the heading "When to change digital passwords and keys?" will need to be accordingly updated.

UPDATE: a text-based link is now being used, such that a relative URL is in play, meaning that there are no concerns when moving this book to a different location.

MarkJFernandes (discusscontribs) 15:22, 21 May 2020 (UTC)


Add information about using a Faraday cage/shield with a potentially security-compromised computer?

[edit source]

A Faraday cage/shield can be used to block unwanted EM communications from and to your computer—communications perhaps arising due to malware or other kinds of hacking. This sounds like worthwhile information to add to this chapter somewhere.

Electrical conductors are used in Faraday shields. Metal is such a conductor; specifically, aluminium foil has been used for such shields (in respect of RFID blocking for mobile phones).

A potentially cheap solution for building a Faraday shield for a computing device—one that might not require purchasing anything new, and such that you can still use the device whilst the shield is on—is as follows:

  1. Take your computer with you into a metal car.
  2. Shut all the openings of the car.
  3. Add foil to cover-up those window areas where radio signals can get through.
  4. Then use the computing device in the car.

Hopefully, Bluetooth and WiFi will be blocked by doing these things during your use of the computing device in the car. The just-struck-through idea is unlikely to work, since slight gaps in metal shielding cause such shielding to fail.

Perhaps a cheap alternative is to use an emergency thermal metallic disposable sleeping bag. You can place your laptop in the bag, and then have the bag extend over your head, at least such that it covers part of your body. Most likely, radio signals would be blocked by taking such measures. Again, though, this idea is unlikely to work, because gaps in metallic shielding cause shielding to fail. Metal apparently reflects RF communications, and in some cases, it seems, has the potential to increase signal strength rather than decrease it. If this idea were modified so as to include saline water—perhaps saline water in a foot bath, with the computer user covered in the sleeping bag from head to toe, and with their feet in the foot-bath water—then perhaps it might work. The saline water apparently absorbs RF radiation instead of reflecting it. In fact, this attribute of saline water can probably be used in the drawing-up of other solutions to this general problem.
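
The absorption claim can be roughly quantified with the standard skin-depth formula for a conductor, δ = √(2/(ωμσ)): the field falls by a factor of e for every skin depth of material traversed. A small sketch follows; the conductivity figures are approximate textbook values (my assumption, not measurements from the book), and the good-conductor approximation is only order-of-magnitude for saline water at GHz frequencies, where dielectric losses also matter:

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space (H/m)

def skin_depth(freq_hz, sigma, mu_r=1.0):
    """delta = sqrt(2 / (omega * mu * sigma)) for a conductor of conductivity sigma (S/m)."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * MU0 * mu_r * sigma))

# Approximate conductivities in S/m (rough textbook values):
d_aluminium = skin_depth(2.4e9, 3.5e7)  # about 1.7 micrometres at 2.4 GHz
d_saline    = skin_depth(2.4e9, 5.0)    # about 4.6 millimetres at 2.4 GHz
```

So at 2.4 GHz (WiFi/Bluetooth), a saline layer a few centimetres thick spans many skin depths and should attenuate such signals very heavily, which is consistent with the foot-bath idea above.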

Using a foil emergency thermal tent may provide a cheap way to construct a Faraday cage/shield in which a hacked PC can be 'safely' used (without having to worry about successful EM communications being performed by malware on the hacked device). Such tents can be very cheap to buy. Worrying about MITM attacks on the material is perhaps unnecessary, because tests can be performed when the tent arrives to confirm that the material is sound. Gaps in the tent might cause the shield to fail; in this respect, using aluminium tape to tape up the gaps might be a good idea, although doing so might cause breathing problems for those in the tent.

Alternatively, using transparent or semi-transparent shielding material might allow the construction of a Faraday cage/shield where only the PC needs to be inside the cage/shield (because users can see through the material to the keyboard keys and the VDU). Such materials may be mesh materials, such as a copper mesh. See here for more information on such materials.

A variation on this is to use EM absorbers rather than reflectors, at least for part of the shield's function. It would seem that such shields wouldn't then be called Faraday shields, but they would still be effective shields for some kinds of EM radiation. Saline water is apparently an EM absorber (due to its electrical conductivity), is conveniently also transparent, and is cheap, meaning that it can perhaps act as an EM-shield screen filter: allowing the seeing of a computer's screen whilst at the same time filtering out RF emissions, at a low price point. Getting a suitable container for the liquid is perhaps not so straightforward: the container must not let out water, and for laptops without an external keyboard, must leave enough room to be able to use the laptop's integrated keyboard. An A4 clear pencil case might be such a suitable container, and also quite cheap. If there are concerns about it leaking, perhaps thorough leak-detection testing, the use of water-proofing sealants, and the precautionary additional use of protective water-resistant computer covers can go some way towards allaying such concerns.

The conductive gel used for things like ultrasounds can perhaps also be used for shielding the screen area such that you can still see the screen (again using the EM-absorption principle). Such gel can be very cheap to buy, and apparently you can even easily make your own DIY gel—see here. Have tried DIY gel using Aloe Vera hand-wash with salt. It does work to block a cellphone signal, but quite a bit of salt needs to be added, and when mixed with the hand-wash, the hand-wash turns cloudy, so it doesn't appear suitable (because it is no longer transparent nor translucent). Have also tried with saline water, and saline water does work (for cellphone signals); again, quite a bit of salt is needed.
Have successfully used Dr Oetker gelatine, mixed with salt and water, to create a firm transparent gelatine layer through which one can easily see a smartphone screen, and which shields mobile-phone signals (and presumably also Bluetooth and WiFi). Such gelatine may already be available in a user's cooking cupboard, but even if it has to be bought, its cost in respect of the amount required for the shielding is quite low. Being able to use it on a partly upright large computer screen is perhaps tricky; what is maybe needed is some kind of transparent see-through plastic mould, big enough to fit over such screens, for holding the gelatine. It appears that if the gelatine is too firm, the shielding doesn't work, probably because the conductivity is then not strong enough—water's conductivity apparently isn't so much present when the water is solid (like ice).

Bought a ¼-litre bottle of Aquasonic 100 Ultrasound Transmission gel, produced by Parker Laboratories Inc., from Amazon[5], at the low price of £3.95. The gel indeed shields a mobile-phone signal, is usefully clear, and also usefully firm. However, the gel is lumpy, such that when you look through it at a computer screen, what you see is mostly too distorted. Have tried to liquefy the gel so that it can be set without any lumps, but without success: I tried microwaving it, as well as thinning down a diluted solution of the gel with water, but all to no avail. Simply "painting" the gel thinly (such that there aren't any significant lumps) also doesn't seem to work—perhaps the barrier must have a thickness greater than simply a thin layer? Melting the gel using conventional heating methods (such as over a stove) might be more successful, such that when the gel is later cooled down, depending upon the container, it is able to set without any lumps.
If such melting and setting is possible, then perhaps a glass pane, maybe one from or even within a photo frame, can be re-purposed to construct a suitable transparent container for the gel setting. Have successfully pinned ultrasound gel between two transparent materials, such that distortions caused by lumps in the gel disappear. However, a new issue has now been brought to light: the presence of bubbles in the gel, even after such pinning, such that the shield fails for lack of protection at the locations of the bubbles (due to their too-large sizes). Have tried heating up the gel to see whether the heating process might get rid of such bubbles, but found it to be ineffective. Have also tried diluting the gel, and then progressively bringing it back to its original consistency, to see whether that might get rid of the bubbles; that too didn't work.

It might be possible to get rid of the bubbles if the gel is thinned to more of a watery state and used in that state, but then there's the issue of leakages due to the lower viscosity of the material. Such measures might pan out as a workable solution, but more investment would first need to be made in the area of preventing leaks in the container. Another solution to such issues is probably to pin the gel between two sheets such that there is a fairly thick space between the sheets, in such fashion that the sizes of yielded bubbles are too small, and hopefully not clustered enough, to cause the EM shielding to fail. Perhaps a novel way to construct an "RF-absorbing gel/water" container that is used over a screen—shielding the screen in such fashion that it still remains visible to users—is to use double-glazed windows for the container: the top of the windows would be sawn off, and the water/gel would then be poured into the gap(s) of the double glazing. Double-glazed windows appear to be available at low prices.
Also, old windows, that might otherwise be thrown on the scrap heap, might be available at even lower prices (perhaps even given away for free). Information on Wikihow about how to make a Faraday cage can be found here. In particular, the Wikihow information indicates that having layers in the shielding, such that the layering alternates between electrical insulation and electrical conduction, results in stronger shielding. Another thing to consider is that, when choosing conductors, using materials with higher electrical conductivity will likely result in stronger shielding—see here for a table comparing the conductivity of different materials.

Useful comparison of different RF shielding techniques: https://mosequipment.com/blogs/news/the-results-are-in-video-comparison-of-various-competitors-shielding-effectiveness

Such information is relevant to the Wireless Communications chapter, and perhaps should also be added there (see the note about adding such info on that chapter's discussion page).

     MarkJFernandes (discusscontribs) 08:15, 27 June 2020 (UTC)


Can sell computer for spare parts, in the situation that you are unable to "clean" the entire system (which can help to recoup losses)

[edit source]

If you discover your computer has been hacked, or may have been hacked, you can perhaps sell it in order to recoup losses, perhaps to finance the purchase of a new system. Where you are unable to make sure the system is "clean" prior to selling, you can possibly sell the system in the form of spare parts, in order to avoid the ethical issue of selling on a hacked system (which could represent compromised security for its purchaser).

     MarkJFernandes (discusscontribs) 09:31, 18 September 2020 (UTC)

«Miscellaneous notes» chapter   (chapter 10)

[edit | edit source]


On 31.3.2020, I 'raked' the entire National Cyber Security Centre (NCSC) website.....

[edit source]

On 31.3.2020, I 'raked' the entire National Cyber Security Centre (NCSC) website (www.ncsc.gov.uk) for material relevant to this book. Hyperlinks to all relevant material have now been appropriately incorporated into this book.

MarkJFernandes (discusscontribs) 14:18, 21 May 2020 (UTC)


Using unrepeatable-patterns security and deep-fake-resistant videos, to induce user trust regarding the manufacture of an acquired unit

[edit source]

Unrepeatable-patterns security can be leveraged together with `deep-fake-resistant video` technologies (see the talk-page note here regarding security ideas concerning such technologies) to induce such trust. What would happen is that the manufacturer would use 'unrepeatable patterns'-security patterns in the workshop/factory during the manufacturing of the said unit. The patterns could be in the equipment, as well as on the walls and floor; importantly, though, they should be on the unit being constructed. The whole manufacturing process of the unit would then be videoed using deep-fake-resistant technologies. The final video could additionally be cryptographically signed by the manufacturer, perhaps also with an 'authentication cryptocurrency coinage' blockchain to increase trust even more. The video would be securely sent to the receiver of the unit, and would serve as some amount of proof of the integrity of the manufacturing process used in the manufacture of the particular unit received.
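
As an illustration of the signing step only, the sketch below hash-chains the video segments (a much-simplified stand-in for the blockchain aspect) and tags the chain head with a manufacturer key. A real deployment would use a proper public-key signature scheme rather than the HMAC used here, and all names are hypothetical:

```python
import hashlib
import hmac

def chain_digest(segments):
    """Hash-chain the video segments so each link commits to all previous ones."""
    digest = b"\x00" * 32  # genesis value
    for segment in segments:
        digest = hashlib.sha256(digest + hashlib.sha256(segment).digest()).digest()
    return digest

def sign_video(segments, manufacturer_key):
    """Tag the chain head with the manufacturer's key (HMAC stands in for a real signature)."""
    return hmac.new(manufacturer_key, chain_digest(segments), hashlib.sha256).hexdigest()

def verify_video(segments, manufacturer_key, tag):
    """Recompute the chain and compare the tag in constant time."""
    return hmac.compare_digest(sign_video(segments, manufacturer_key), tag)
```

The receiver of the unit recomputes the chain over the received video and checks the tag: any altered, inserted, or dropped segment changes the chain head and so fails verification.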

     MarkJFernandes (discusscontribs) 07:54, 16 October 2020 (UTC)


JTAG might help in overcoming deep hardware hacking; mention it in §⟪Deep hardware hacking?⟫ ?

[edit source]

According to Wikipedia, JTAG is an industry standard for testing PCBs post-manufacture and for verifying designs. It seems that it might be possible to leverage the standard to detect when certain microchips are not what they should be (especially in respect of the JTAG boundary-scan technology), and so to overcome certain deep-hardware-hacking attacks. It may therefore in fact be advantageous for JTAG technology to be present in a computer system, so as to be able potentially to do such verification; in this regard, using a motherboard/mainboard that has a JTAG port may in fact be a good idea. In regard to the security risks associated with being able to inject malware into firmware via a JTAG port, perhaps the firmware can just be wiped and then reinstalled via the JTAG port (thereby overcoming any pre-existing malware).

In respect of the aforementioned, mention of JTAG should perhaps be made in §⟪Deep hardware hacking?⟫.

     MarkJFernandes (discusscontribs) 14:26, 3 November 2020 (UTC)


Perhaps mention IEC 61508 in §⟪Cybersecurity standards⟫?

[edit source]

Raspberry Pi Forums user karrika implied that studying the IEC 61508 standard might be good in respect of establishing security. Perhaps mention it in §⟪Cybersecurity standards⟫? Not sure whether the standard is already adequately covered through the link to the Wikipedia page on cybersecurity standards.

     MarkJFernandes (discusscontribs) 16:16, 30 November 2020 (UTC)

New security inventions requiring a non‑trivial investment in new technology   [Appendix: Part 1]

[edit | edit source]


Design feature for enabling the detection of malware in BIOS firmware

[edit source]

Not sure whether such an invention has already been discovered.

Given a set of operators O, a fixed-size memory S1 (the BIOS firmware), a second fixed-size memory S2 that is blank when the computer starts (RAM), and a legitimate BIOS program stored in S1, find a maximal compression of values that fits neatly and tightly into S1 and that also includes the legitimate BIOS program, such that it is impossible for any program stored in S1 to display the total contents of S1 without simply doing a memory dump of S1 to screen. Then build into the computer system a security-verification sub-system that simply does a memory dump of S1 to screen. The user has a copy of what S1 should be (perhaps from downloading it from the internet on another computer), and then compares the memory dump with that copy. If there is a mismatch, security fails. If there is no mismatch, the user knows that there is no malware in S1, so long as the hardware has not undergone any tampering.

This mechanism relies roughly upon filling-up the BIOS firmware capacity "to the brim" with values that cannot be compressed down any further (cannot be reduced to code that takes up less memory space). Physically disconnecting other components, such as the system disk, might be required. If there is changeable firmware in other components, it could be possible for malware to utilise unpredictable data in those components to trick the user into believing there is no malware. Not so sure how you would get round that; perhaps being able to physically disable the other components would solve such issues.
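The user's comparison step can be sketched simply. This is a hedged illustration only: `dump` and `reference` are byte strings standing in for the on-screen memory dump and the independently downloaded copy of S1; how the dump is actually captured from the screen is left aside:

```python
from typing import Optional


def compare_dump(dump: bytes, reference: bytes) -> Optional[int]:
    """Return the offset of the first mismatching byte, or None if identical.

    A non-None result means the verification fails: S1 does not contain
    exactly what it should, so malware (or tampering) may be present.
    """
    if len(dump) != len(reference):
        # A length mismatch is itself a failure; report where divergence begins.
        return min(len(dump), len(reference))
    for offset, (a, b) in enumerate(zip(dump, reference)):
        if a != b:
            return offset
    return None
```

Reporting the first mismatch offset (rather than just a boolean) may help the user judge whether a discrepancy looks like deliberate tampering or, say, a version mismatch.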

     MarkJFernandes (discusscontribs) 16:23, 5 October 2020 (UTC)


Leveraging option ROMs and more generally the shadowing of firmware to RAM, for better security?

[edit source]

Firmware stored in ROM can be a security risk due to physical hardware tampering. For example, EEPROM chips can be de-soldered and replaced with bugged chips that communicate data wirelessly to nearby snooping devices. Also, auditing for correctness is generally difficult for the average computer owner, as it appears that a specialised hardware set-up is generally required in order to dump the contents of the firmware in some manner where the contents can be verified (devices such as USB programmers are perhaps always needed).

With the foregoing in mind, option ROMs can perhaps provide better security, because the associated firmware is dumped to RAM and run from RAM. The contents being in RAM means that no specialised hardware set-up is required to audit the firmware for correctness. It also means that hardware bugs in the hardware used for permanent storage of the firmware can be overcome: after the firmware is loaded to RAM, that hardware is no longer used (it can even be unplugged if a ROM socket is being used)—the firmware is simply run from RAM. A counter-argument to this latter justification might present itself as "what about if the RAM is bugged?" For some reason, I'm inclined to believe RAM is more "trustable", perhaps because of it being such a common component of computing systems. Users can swap out RAM, but the same is not so easy with EEPROM chips that are pre-soldered to the mainboard. Because RAM can be bought separately, and because RAM is likely readily available in physical shops, the "User randomly selecting unit from off physical shelves" principle can be used to thwart targeted MITM attacks between the supplier and the end-user. Additionally, whereas without option ROMs, security-attentive eyes need to be kept on both the specialised firmware storage and the RAM together (in respect of clandestine hardware bugs, such as espionage hardware), with the above implemented, eyes only need to be kept on the RAM—the attack surface is effectively reduced.

Extending the above-described potential advantages regarding option ROMs to all firmware in general, the BIOS firmware itself can also be driven in the same way—copied to RAM and then run from RAM. Incidentally, doing so would perhaps make security patching of the firmware easier, as the firmware loaded to RAM could then just be patched through the OS during the OS boot. Researching on the internet just now, it does look like some form of BIOS shadowing does take place for speed-performance reasons, but unfortunately, such shadowing is likely implemented by the BIOS code itself. If true, this would mean that malware present in the BIOS code would be able to interfere with the shadowing process (which is undesirable). Instead, the shadowing process should be controlled purely by hardware, or by hardware plus code where the code is very highly secured and unchangeable (not part of the changeable BIOS firmware that potentially contains bugs and backdoors).

     MarkJFernandes (discusscontribs) 08:45, 9 October 2020 (UTC)

Example set-ups & implementations   [Appendix: Part 2]

[edit | edit source]


Notes on example set-ups that probably ought to be filed under this Appendix part (in main content)

[edit source]
☞  

Use a desktop or tower computer rather than a smaller device (cheaper, size-based security, customisation-based security...).


☞  

Wiring security locked inside a secured computer case; perhaps it's a bit like "bugging" your own system, but for your own security purposes rather than for spying on others? Ports (such as USB ports) otherwise exposed can be pushed into the case so that they are not exposed, and then locked inside the case along with attached cameras, microphones, etc. By using a transparent computer case, a camera locked inside the case can take pictures of what is happening outside the case. For security, all surveillance can be automatically pushed to cloud storage over WiFi, where security credentials only permit append operations and not delete or modification operations (I think FTP directory permissions can probably be set up for this); there may be data-protection issues re. such CCTV-like tech though.
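The append-only idea can be sketched locally as follows. This is only an illustration of the append-only discipline on the writing side: a real deployment would need the restriction enforced server-side by the storage credentials (so a compromised client still couldn't delete or rewrite history), and `append_snapshot` with its simple length-prefixed record format is made up for this sketch:

```python
import os
import time


def append_snapshot(log_path: str, snapshot: bytes) -> None:
    """Append one timestamped surveillance record; never truncate or seek.

    O_APPEND makes every write go to the end of the file, mirroring the
    append-only permission the cloud store would enforce.
    """
    fd = os.open(log_path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)
    try:
        header = f"{time.time():.0f} {len(snapshot)}\n".encode()
        os.write(fd, header + snapshot + b"\n")
    finally:
        os.close(fd)
```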


☞  

When building a new system, it may be worthwhile using a mainboard that has specific compatibility with particular custom BIOS/UEFI firmware for the provision of better security. For example, you may choose a mainboard based on Coreboot/Heads compatibility.


☞  

Investing in a great amount of volatile system RAM, so that software can be installed and run simply from such RAM, may be a good idea with respect to achieving a security advantage by having and using non-volatile (persistent) storage as little as possible (such storage might end up only existing in the small firmware ROMs).


☞  

To improve the performance of a live DVD system without incurring the extra expense attached to obtaining sufficient system RAM into which the OS may be completely loaded, RAM caching can perhaps be used. Live DVDs can offer a few security advantages but may suffer from slow performance (which perhaps would be overcome if the OS were instead installed to a system disk [perhaps an HDD or an SSD]). A "halfway house" between completely loading the OS into system RAM and the "unaccelerated" manner of using live DVDs, is to cache to system RAM those uncached files that otherwise would reside solely on the live DVD disc. Such caching might be able to improve the performance of unaccelerated live DVD set-ups so that it is close to the performance of loading the OS completely into system RAM, without incurring much extra expense tied to purchasing more system RAM.
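The caching idea can be sketched as a simple read-through cache (illustrative only; a real live-DVD set-up would more likely rely on the OS page cache or a tmpfs/overlay filesystem than on application code like this):

```python
class RamReadCache:
    """Read-through cache: the first read of a file comes from the slow
    medium (the live DVD); subsequent reads are served from system RAM."""

    def __init__(self) -> None:
        self._cache = {}  # path -> file contents held in RAM

    def read(self, path: str) -> bytes:
        if path not in self._cache:
            # Cache miss: pay the slow-medium cost once.
            with open(path, "rb") as f:
                self._cache[path] = f.read()
        # Cache hit (or freshly filled entry): served at RAM speed.
        return self._cache[path]
```

Note this only caches whole files and never evicts; the "halfway house" appeal is precisely that only the files actually touched end up occupying RAM.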


☞  

Secure computing system, especially for business purposes:

  • Any needed microchip hardware (including SD cards) should be obtained in ways so as to thwart MITM (man-in-the-middle) attacks targeted on the path between first supplier and end-user, where the targeting is aimed at certain individuals. You need to pay particular attention to the obtaining of such hardware, because such hardware can often be maliciously reprogrammed for bad purposes, and it is difficult to know that the hardware is “clean” of such reprogramming (whether that be through verifying pre-existing cleanliness, or through attempts made to reset the hardware to a clean state). Other needed hardware can also be obtained in such ways for more security.

    Such obtaining might be by:

    • personally, physically, and randomly selecting units from many other like units at a physical store, whilst making sure no store staff are able to use ‘sleight-of-hand’ kinds of tricks to change the unit you’ve chosen to one that might be compromised;
    • purchasing units from cashier-less stores (like Amazon GO) where no human intervention can occur between unit selection and purchase;
    • making anonymous purchases where others including shop staff, cannot identify that the purchase is for you (this may involve making an anonymous Amazon purchase delivered to some transiently-used Amazon hub locker that doesn’t reveal to onlookers and Amazon staff, any significant association between the locker and you);
    •         AND/OR
    • purchasing many units of some component you need, where you only need one unit, perhaps purchasing the units at different times, and in different places, and then returning all but one unit, for full refunds, where the one unit kept is perhaps the unit most likely (in your analysis) to be the genuine article.

  • obtain a brand new Raspberry Pi computer.
  • The Raspberry Pi computer will have a battery pack, on which it runs in a low-power state when it is not being used. This will ensure that the boot media is not needed every time a new computer session is started, because the OS will be loaded entirely into the system’s volatile RAM (using piCore), and when the computer is not used, it will simply be put to 'sleep'. The OS, as loaded into the system’s volatile RAM, should be encrypted, perhaps in a way that is a bit like doing full-disk encryption on a RAM drive stored in the system’s volatile RAM. Such a set-up should cause the Pi device to behave as a tamper-evident mechanism, since if the computer is interfered with whilst in the low-power state, this will likely corrupt the state (contents of the system’s volatile RAM or otherwise) in a way that manifests itself as not being able to log in to the powered-on system, or as the system no longer being in "sleep" mode but instead in a completely shutdown state. If the system state became corrupted, a "reboot" would perhaps be needed to 'wipe' the corrupt state clean, and such a requirement would be some level of evidence of tampering. Encrypting the system’s volatile RAM data is both to prevent adversaries from understanding the RAM data, and to create a more effective tamper-evidence system (the system shall perhaps more likely go into a state of being “no longer usable until it is rebooted”, if the state as held in the volatile RAM is kept in an encrypted form unintelligible to others).
To better ensure that valuable data isn't stolen from the system’s volatile RAM, data and file deletion during the Pi device’s operation could take on the aspect of securely wiping (perhaps zeroing) the relevant memory addresses of the system’s volatile RAM, in order to securely wipe traces of files and data (encrypting volatile RAM data may not be sufficient in the situation where adversaries are able to get hold of decryption keys, which could happen months or years after some adversary captures the system’s volatile RAM data) [it should be noted that data can sometimes persist in certain kinds of volatile RAM after shutdown]. Also, for extra security, the user perhaps should try not to leave data exposed during the Pi device’s sleep periods; this may mean deleting, just before putting the device to sleep, files that don’t need to be kept in the system’s volatile RAM during sleep periods.
  • have the Pi’s OS loaded either via a live DVD or an SD card; notes on this with some level of differentiation depending on whether a DVD or SD card is used, are detailed in the following table:
    OS loaded via live DVD | OS loaded via SD card

    Live DVD is loaded using an external USB DVD drive.

    The live DVD should be read-only. To implement this properly, empty space on the DVD may need to be zeroed (blanked).

    Once per week, assuming no tampering is detected in the system, the constantly-on system would be used to reinstall the DVD drive's firmware. The firmware’s reinstallation files would not be stored on any removable media, but in the Pi device’s constantly-on encrypted volatile system RAM; it is believed such storage is likely to be very secure. Such reinstallation would help to ensure that the DVD drive remained a trusted device. In order to be able to do this, the DVD drive to be used must allow such reinstallation of the firmware, in a way that always wipes any malware that may have managed to get itself into the firmware (such infection should be rare, perhaps non-existent, but could happen if adversaries were able to get physical access to the DVD drive).

    Not all the Pi models support booting in this way (booting using USB-connected boot media), so you need to make sure the Pi model used will work with this method of booting.

    DVD drives take time to get spinning, and so it might be worth powering DVD drives from the mains (if possible), to overcome any issues related to any “USB mass storage” booting timeout on the used Pi device.

    It is probably a good idea to make sure the external DVD drive will really be able to be used with the Pi device, both in respect of there being enough power for the DVD drive if the drive is to be powered over the USB interface, and in respect of whether the model of drive can really be used for booting the particular Pi device model you plan to use.

    If using a live DVD, disabling the SD card slot may be a good idea in case of accidental usage of the slot—SD cards appear to have many security vulnerabilities.

    In the original design, a USB-connected DVD drive to boot the system was chosen instead of an SD card, as this was considered more secure. SD cards can be safely used if encrypted, where the SD card and any other individuals illicitly wirelessly connected to the SD card (using secret wireless-communications tech embedded in the SD card) are unable to modify the data in a fashion that results in the data in un-encrypted form being modified in specific ways [this might easily be put into effect by keeping the encryption and decryption key(s) secret, as well as unknown to the SD card]. Unfortunately, the Rasp Pi device does not presently appear to be well set-up to use such an SD card to boot the Pi device (this could perhaps be implemented in the future by the use of some custom EEPROM [which could end up being a good set-up]; the security of the EEPROM is another thing to consider, though EEPROM generally appears to be more secure than SD cards [the EEPROM likely has no reprogrammable microcontroller, and no unused memory cells in large quantity]).

    Upon reconsideration of whether SD cards could be used, they could potentially be used if it were considered quite unlikely that some brand of SD card was always compromised right from the point of its first supply: by making strong use of the methods described in these notes for thwarting MITM attacks targeted on the path between first supplier and end-user, a trusted SD card could then probably be obtained. Such an SD card could then be used for booting the OS.

    The OS image would likely be downloaded to a cheap mobile device (e.g. smartphone, tablet). The image would then be transferred to the boot media (whether that be to a live DVD via the external USB DVD drive connected to the mobile device, or to an SD card perhaps simply inserted into the mobile device’s external SD card slot).

    In order to be extra sure that the image is genuine, the following trick can be used:

    1. obtain the image in several ways (it is probably a good idea that one of the ways does not make use of NAND flash technology [mobile-device downloading will likely use NAND flash technology due to the mobile device probably containing an internal SD card]; NAND flash technology is perceived to be a particular point of attack);
    2. create a separate boot medium (live DVD or bootable SD card) for each obtained image;
    3. load the OS using each boot medium in turn;
    4. after each OS load (i.e. after each boot), make sure all the boot media byte-for-byte match, using the software loaded within that loaded OS session/boot (this means doing the same checks again and again, because we can’t be sure that any one loaded OS copy has not been compromised);
    5. if every single check indicates the images on each of the boot media are the same, we can likely conclude that there is nothing to worry about: it’s unlikely that every single copy has been compromised, so whilst we can’t trust any one copy, we can trust the unanimous answer of all the copies together.

    Multiple backup copies of the boot media could be stored for safekeeping in different locations. This provides some capacity for making sure that no one has maliciously interfered with one of the boot media (by being able to do simple comparison of data on one boot media with one or more reference copies). Note that it is possible to tamper with SD cards in significant ways, such that such checking fails to detect such tampering. In contrast, it is likely impossible to tamper in any significant way, with DVDs that have been properly made read-only. Such DVDs can still be deceptively replaced with hoax fake DVDs though. However, the checking just described, should detect any such hoax replacement. To conclude this paragraph, booting the OS from read-only DVDs may be the preferred option so as to leverage potentially greater security.

  • the Pi device would be used to log in to cloud-hosted DaaS (Desktop as a Service) services, meaning that the Pi's limited computing power would not really be much of a concern—the end-user’s Pi computer would act as a thin-client computer for the user. Such a thin-client set-up also helps with security in the sense that the end-user’s computer then becomes more “barebones”, with the (client) software used probably being very trustworthy. There would then be a shift of security concerns from the end-user’s site to the DaaS provider—reliance would then be made on the DaaS provider’s security. The security of the DaaS provider—normally speaking—would be expected to be quite high; such providers would generally have many more resources for ensuring this.
  • such a set-up would rely on the security of the "TLS cryptographic security certificate" system. Because certain certification authorities would likely be less trusted than others, perhaps only a select few security certificates would be regarded as trusted and used for computing over the internet. In order to stay up-to-date with the certificates in the case of accidental power loss, new certificates would be saved to removable media, perhaps at the end of each day or each week. If using a multi-session write-once DVD, then it would probably be a good idea to record somewhere else the location (in terms of tracks, sectors, etc.) on the DVD of the last write. That way, if an adversary made additional writes, it would be detected. The record could be kept on paper, in a safe, etc.
  • A USB cryptographic security-key token would be used in conjunction with a password, to log-in to the DaaS services. Passwords would be changed every now and then.
  • The method of firmware reinstallation mentioned above for reinstalling the DVD drive’s firmware, so that the DVD drive can remain trusted, can be used on other devices and peripherals, again in order to maintain trust in such devices/peripherals (such devices and peripherals might be the computer screen, mouse, keyboard, etc.).
  • The low cost and high availability of the Pi device is desirable. The high availability makes the methods described earlier concerning thwarting MITM attacks targeted on the path between first supplier and end-user, stronger and easier to employ. The low cost means that if it really is needed, a new unit can be bought as a replacement, probably in those cases where security may have been compromised in a significant way (perhaps every time your intrusion-detection system “goes off”, for example).
  • The non-integrated nature of the set-up (by, for example, not having the keyboard, screen, mouse, DVD drive, DVDs, and SD cards integrated into one unit), can improve security, it would seem (see here).
  • To be even more sure that the Raspberry Pi hardware has not undergone tampering, or been maliciously replaced with a deceptive fake, certain physical-property authentications can be made of the hardware. For example, using visual inspection, the device can be compared with downloaded photos of how it is supposed to look. Other measurements might be weight, X-ray images, etc.
  • The equipment would be locked-up when not in use, and other non-computer measures would be used for things like tamper evidence and prevention of illegitimate password capture (capture done perhaps by means of hidden cameras).
  • EEPROM on Rasp Pi devices is a point of attack, but adversaries need to have physical access, or somehow infect the system with malware-laden software, to get at it. The firmware can be write-protected with a software mod and hardware-config mod. Doing so will thwart software-based attacks. Physical access to the device should be adequately secured with the other details of the set-up. Encasing the EEPROM chips, jumpers, and input pins with removable transparent glue containing unrepeatable patterns, could provide particular tamper-evident security aimed specifically at protecting the firmware chips and firmware code.

    It appears there is no user-writable EEPROM on the Rasp Pi 3B+. Since the EEPROM is a point of attack, not using it would seem to make the whole set-up generally more secure, so perhaps using the 3B+ device instead of the 4 device might be a good idea. Probably, there is essentially no changeable firmware on the 3B+ model (instead, firmware is permanently burnt into the SoC), and since firmware is considered a particular point of attack, using the 3B+ model may be a good idea. However, the firmware code permanently burnt into the 3B+ has known bugs. Admittedly, they are patched during the booting of the OS (normally off an inserted SD card) for newer versions of the boot file (`bootcode.bin`), but still such vulnerabilities might be significant.

    Risks (such as backdoors and malware) in the Pi device’s firmware (which could be due to the use of closed-source blobs in the firmware) can possibly be mitigated by disabling unneeded functionality through removal of code from the firmware, and by doing certain kinds of sandboxing whether at the firmware level, OS level, or the level of applications running over the OS.

  • It might be a good precautionary measure to RF shield the Pi device completely, and to connect to the internet via a USB-connected WiFi dongle or an ethernet connection. This could perhaps be achieved by placing the Pi device in a steel box, where there is a small cut-out hole for any USB dongle cable or ethernet cable. If the shielding worked, it would ensure that none of the components on the Pi device would be able to communicate wirelessly of their own accord, which could be a particular mode of attack/theft. Instead of a steel box, shielding could perhaps be implemented by the Pi device being water-proofed and then submerged in a water solution having high electrical conductivity. Not sure which would be cheaper: the water solution approach or a metal-box approach. Metal reflects RF signals and so can actually sometimes amplify wireless capabilities. So in such regard, the water-solution approach might be better. A water solution approach also potentially has the benefit of permitting visual inspection of the internals of the Pi device, which can make detection of physical tampering easier; this can then, as a consequence, also indirectly help with the prevention of such tampering (by making such attacks less attractive to adversaries).

See this Raspberry Pi forum topic for the original seeds of this set-up idea, which eventually grew into the greater detail of the idea as present here; the set-up was developed somewhat within that forum topic.
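The "unanimous answer" check from the boot-media notes above can be sketched as follows. Hashes stand in for a full byte-for-byte comparison here, and this is only an illustration: in the scheme described, the check would be re-run from within each booted OS session in turn, precisely because no single session can be trusted on its own:

```python
import hashlib


def images_unanimous(paths) -> bool:
    """Hash every boot-media image; trust only the unanimous answer.

    Returns True only if all the independently obtained images are
    byte-for-byte identical (i.e. they all produce the same SHA-256).
    """
    digests = set()
    for path in paths:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            # Stream in chunks so full OS images need not fit in memory.
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        digests.add(h.hexdigest())
    return len(digests) == 1
```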


☞  

Using a separate inexpensive but safe computer/device for doing internet things, where the main device has been stripped of communications capability (by removal of hardware), appears to be a good security idea; it can be particularly good for budgetary reasons. Perhaps it is an extension of the "Having intermediate device for internet connection might be more secure?" idea already present in the talk pages of the book (see https://en.wikibooks.org/wiki/Talk:End-user_Computer_Security/Main_content/Wireless_Communications#Having_intermediate_device_for_internet_connection_might_be_more_secure%3F ). Internet tethering to the main device may not be a good idea, because malware can then potentially piggy-back over the internet connection so as to do much damage to the computing conducted on your main device; strong separation, as suggested by this idea, can overcome this. The internet device only needs to be able to do the internet things required by the user; all other things can instead probably be safely done on the non-internet main computing device (where greater computing resources are likely available) [so long as the main device isn't hooked-up at all to any communications potential {such potential perhaps being through the internet}].


☞  

Trying to use a SIM-enabled smartphone/tablet/smartwatch as a conventional kind of laptop computer, by connecting it to a conventional kind of keyboard with trackpad, and an external screen, might be a good security set-up (a USB-C hub can probably be used for making such multiple connections; a "USB-C to HDMI" adapter or an HDMI socket on the mobile device might be required for an external display). The Samsung Dex app can be used on "Samsung Dex"-capable smartphones to give a desktop form-factor experience with such a set-up, even though powered by a "mobile-phone form factor" device--it's not just a magnification of the smartphone screen, but an adaptation of the display so that it is suited to desktop screens/interfaces. Huawei have similar functionality with at least some of their smartphones, called desktop mode (see https://www.coolsmartphone.com/2018/08/08/huawei-desktop-mode-in-depth/).
For budgetary and security reasons, this set-up can be good because rather than having a netbook or other kind of laptop for the internet, as well as a mobile device (such as a mobile phone), you can instead just use one device for both use cases--such a system can be more barebones. Using a smartwatch might be particularly good in respect of it being more difficult to steal it when worn on your wrist; you could perhaps add a lock mechanism so that without a key or unscrewing a screw, it is difficult or impossible to remove the watch from your wrist; bear in mind though that a smartwatch at present, will likely be more expensive than a smartphone.
If using such a set-up, you could perhaps use a second-hand monitor, screen, or TV, if wanting to cut down on costs. The only thing about using a second-hand screen, is that hidden espionage technology might be in such hardware, however, so long as illicit screen capture is the only security threat, that level of security weakness might be acceptable and tolerable.
You can perhaps trim off some unneeded functionality (and become more barebones) by not having mobile SIM capability in the set-up; instead communications that would otherwise be done over a SIM network, can instead be done simply through the internet, such as by using Skype (Skype can be used to receive calls from, and make calls to, landlines, mobile phones, etc.) Skype uses end-to-end encryption at least for Skype-to-Skype calls, meaning spying on Skype-to-Skype calls is probably very unlikely; mobile networks might not do such complete encryption, probably spying on mobile calls is possible at call centres, and might be particularly worrying due to phone networks being sprawled over different countries; such differences can further incline one towards removing SIM functionality. A user may choose to use a SIM but only for mobile internet; in such cases, the end-to-end encryption offered by communications tech like Skype, over such mobile internet, should still retain its security benefits.
To be even more barebones and cheap, instead of connecting the single computing device to an external display, an optical screen magnifier (simply something like a magnifying glass) can be used to make the device's display appear large. You can also get projectors (like cinema projectors), that can optically project the device's display on a wall or something similar. Both projectors and magnifiers are quite cheap; they also probably never contain microprocessor tech. meaning the attack surface posed by such tech is perhaps unlikely to be there (you could probably buy such things over the internet, second-hand, and not worry about tampering attacks that are generally possible with computing devices).


☞  

Is purchasing a second-hand Blackberry Curve 9720 device, for acting as a WiFi hotspot connected to the internet over mobile broadband, okay because of the nature of the security on such Blackberry devices? I managed to buy 3 (three) second-hand Blackberry Curve 9720 phones at a total cost of just £15 (for all three); one of the sellers appears to have been representing a very professional business heavily invested in mobile-phone recycling and reuse. It also appears that there are plenty of such phones available at similar prices in the used-goods market on Facebook Marketplace. Using a smartphone for the internet over a mobile SIM appears to be a relatively cheap way to get the internet. If buying from a private seller, you probably have to make extra checks to make sure the phone is genuinely a Blackberry phone; it could instead be a clone with weak security. It is theorised that second-hand Blackberry phones can be trusted because of Blackberry's high security; such security engenders trust in the use of the factory-reset function of their phones. The same doesn't seem much true with other brands of phone (see https://www.blackberry.com/us/en/products/secure-smartphones for more info). Some days have passed since my purchase of the 3 (three) second-hand Blackberry Curve 9720 phones, and I have discovered a flaw in this proposed method of securely acquiring phones. Whilst I still believe that this model of phone goes some way towards ensuring no tampering that can't be easily remedied through invoking the standard factory-reset function, I can't be certain that the phones in my possession are genuine Blackberry Curve 9720 phones—they could be good fakes. Not only do I not trust my suppliers, I also can't trust that the standard mailing of the items was not compromised along the way of their transit from the supplier to myself (the buyer).
I had thought that there would be some mechanisms for easily ensuring that the phones were genuine, but none of the methods I've found through online research—which actually hasn't brought up much information for dealing with such checking (which is perhaps a bit telling in itself)—appear to be particularly secure. Things like the IMEI number appear to be easy to fake; for example, an adversary can buy a genuine Blackberry Curve 9720, copy the IMEI number to a deceptive fake Blackberry Curve 9720, and then simply keep the original true Curve phone out of service, unused.

     MarkJFernandes (discusscontribs) 12:16, 30 November 2020 (UTC)


Reorder parts in Appendix so that this part comes first?

[edit source]

Such reordering would appear to make sense, as this part is closer to the content of the chapters of the book (such chapters make up the main body of the work).

     MarkJFernandes (discusscontribs) 18:08, 12 November 2020 (UTC)

  1. Perhaps one of the non-obvious reasons why it is easy to do, is that if malware is ever detected on a system, it may be hard to prove beyond doubt that it was due to a certain adversary and/or action; adversaries may say that perhaps the user just infected their own system through some downloading they did (for example).
  2. For OSes that aren't normally capable of being stored in such RAM, perhaps software can be run to establish a RAM drive using such RAM (something like a virtual drive held in RAM), and then the OS simply installed or copied to the RAM drive.
  3. If you have no glue gun, you could try using a hairdryer instead (initial experiments indicate that using a hairdryer works).
  4. By volatile system RAM is meant the system's volatile RAM that has no separate powering, such system RAM being a historic feature of computer systems.
  5. See https://www.amazon.co.uk/gp/product/B000ERJDX4
