Event Notes: OWASP Dublin - Denial of Trust & Selfie Pay
In line with this year's idea to expand my horizons and stay fresh, I completed another event earlier this summer. This follows on from the VR/AR/3D event, the Digital Disruption and Project Management event, and the Amazon Web Services User Group event, averaging about one new talk every 1.5 months. Not bad, and hopefully I can sustain this average for the rest of the year! I've another Amazon Web Services event coming up soon - if anyone knows of others on any topic, technical or non-technical, please let me know!
In June 2018, it was OWASP (the Open Web Application Security Project) and their talk titled 'Denial of Trust and Selfie-Pay'. For more details on who OWASP are, I suggest checking out their main site; however, to give an overview, here's the intro lifted directly from the site:
Every vibrant technology marketplace needs an unbiased source of information on best practices as well as an active body advocating open standards. In the Application Security space, one of those groups is the Open Web Application Security Project (or OWASP for short).
The Open Web Application Security Project (OWASP) is a worldwide not-for-profit charitable organization focused on improving the security of software. Our mission is to make software security visible, so that individuals and organizations are able to make informed decisions. OWASP is in a unique position to provide impartial, practical information about AppSec to individuals, corporations, universities, government agencies and other organizations worldwide. Operating as a community of like-minded professionals, OWASP issues software tools and knowledge-based documentation on application security.
For me, the first of the two talks proved the most interesting in the end, although I went in with the lowest expectations for it. A sign of a good speaker, as well as a surprisingly interesting topic! As always, here are the raw notes from the two talks, with my own observations at the end.
Denial of Trust: The New Attacks
Erosion of Trust: fake news, data misuse, breaches
Integrity Attacks: subtle or overt corruption of data.
Accusing the Breach: how to prove data hasn’t been corrupted.
Note: Do not say “blockchain” is the solution to everything.
It's very hard to prove a negative. The Intercept ran an article along the lines of "it's impossible to prove your laptop hasn't been hacked" - they couldn't tell whether it had been tampered with or not.
Integrity attacks: a lot less noticeable. Detecting the origin requires deep event logging and time references. Data may need to be re-validated. Most companies don't do deep logging, and many are even restricted from doing so.
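One common building block for making logs tamper-evident (my own illustrative sketch, not something the speaker prescribed) is hash-chaining each entry to its predecessor, so a later silent edit breaks the chain:

```python
import hashlib
import json
import time

def append_entry(log, event):
    """Append an event, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"event": event, "time": time.time(), "prev": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log):
    """Recompute every hash; any silent edit breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "user login")
append_entry(log, "record updated")
assert verify_chain(log)

log[0]["event"] = "record deleted"   # a subtle integrity attack
assert not verify_chain(log)
```

This only detects corruption after the fact, of course - it doesn't tell you who made the change, which is where the deep event logging and time references come in.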
Important to note that the Fraud Department doesn't care about non-financial data.
Businesses tend to be siloed, without an overview of cross-boundary data (they accept data without questioning it).
Observation from the speaker: thank goodness for GDPR, as it's making businesses look at what data they hold - that data is radioactive.
Dark patterns in UI design: e.g. an ad that made it look like you had a smudge on your screen. www.darkpatterns.org
Bait and switch
(E.g. Back button doesn’t work).
‘Intuitive’ UI abuse. We get habituated through icon or colour choices.
Creating anxiety through artificial scarcity or timeouts.
Even battery percentage is constantly read by background processes on everyone's device.
It's not hacking! Only designers will be able to spot the malice. "Engineering grade" UIs.
We can’t trust ourselves.
Bias: a recommended book on the topic is Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Amazon UK link).
A belief that there should be bug bounties to search for algorithmic bias.
“Former security expert becomes developer and makes lots of security mistakes” (online article).
Subtlety of data corruption: David Satter (journalist) had tainted emails leaked (the emails were hacked, modified and then published). Even when he reviewed them with his cyber-security-expert son, they didn't find all the alterations! Link to article on this story.
Defending against denial-of-trust attacks: earn trust through honesty, transparency, predictability, capability, willingness to correct mistakes (a common problem for businesses in a breach - not being willing to accept it), and accountability.
Remember 'Trustworthy Computing' (from Microsoft)? The group was only focused on making computers impervious to attack, not on verification or bias. Blockchain doesn't fix this either!
Is 'zero trust' an answer? The short answer is: yes and no. Don't trust something just because it's on the inside of your firewall. Google has five white papers on how they pulled it off.
Trust = granting access without verifying; for some, trust = granting access BECAUSE you verified. What it really means is verifying both you and your system.
Trust is neither binary nor permanent. What do you trust them to do? What conditions need to be true? For how long? (Depends on sensitivity, size of function)
E.g. the speaker is allowed to buy booze but not electronics when abroad. And for how long should that trust hold?
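That "scoped and time-boxed" idea can be sketched in a few lines - a hypothetical grant check of my own invention, not anything from the talk:

```python
from dataclasses import dataclass
import time

@dataclass
class Grant:
    subject: str
    action: str          # what we trust them to do
    expires_at: float    # trust is not permanent

def is_allowed(grant, subject, action, now=None):
    """Trust is scoped (one action) and time-boxed (it expires)."""
    now = time.time() if now is None else now
    return (grant.subject == subject
            and grant.action == action
            and now < grant.expires_at)

# The speaker's travel example: trusted to buy booze for an hour,
# but never trusted to buy electronics.
g = Grant("traveller", "buy_alcohol", expires_at=time.time() + 3600)
assert is_allowed(g, "traveller", "buy_alcohol")           # in scope, in time
assert not is_allowed(g, "traveller", "buy_electronics")   # out of scope
assert not is_allowed(g, "traveller", "buy_alcohol",
                      now=time.time() + 7200)              # expired
```

The point is that neither field is optional: a grant with no action scope or no expiry is exactly the "binary and permanent" trust the speaker warned against.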
Question to consider: What happens to a community without trust? What happens to a society without trust?
Pranking people undermines trust.
Reinforcing trust: prevention (well understood processes), detection (visibility and control over changes), correction (ability to restore or adjust)
We need to do more of: continuous authentication, data validation, accountability
Not this: don't just hash all the things (i.e. blockchain, etc.)!
Trustworthy data and systems cannot be separated from trustworthy people.
We need to get away from distrust!
Mastercard and Security
The second talk was from Tammy Hawkins, Vice President of Commercial Solutions software engineering at Mastercard.
Mastercard is a technology company, not a bank. They provide really fast payment rails. The expectation is that everything is a payment device (IoT, etc.).
Global payment card fraud is $22.8 billion annually, and rising.
Target breach (affected 41 million customers): a 3rd-party contractor was hacked through a janitorial unit! The suspicion is that it was the mafia.
All your data has value to sell, even if you don't use your card. This includes health data, which can be used to purchase medication, etc.
Tons of protections:
chip and pin
AI Fraud pattern detection
fraud rules: can automatically shut down transactions without human intervention if specific trends are seen.
The balance is the friction of using the card: with too many protections, customers can't use the card and are unhappy.
Identity check learnings:
Identity and verification upfront: it has to be immediate, in-flow, to get verified.
Liveness checks: a certificate is put on the device, which links the card to that device.
Device scoring: silent, persistent authentication. The more a device is used, the higher it gets ranked.
Usage pattern/behaviour scoring (how quickly you move through fields, whether you capitalise every first word, etc.).
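The device-scoring note suggests a running score that grows with repeated successful use on the same hardware. A purely illustrative sketch - the decay and boost values are invented parameters of mine, not Mastercard's:

```python
def update_device_score(score, successful_auth, decay=0.95, boost=5.0):
    """Silent persistent authentication: each successful use on the
    same device nudges its score up; scores decay slowly otherwise."""
    score *= decay            # old evidence counts for gradually less
    if successful_auth:
        score += boost        # fresh successful use adds trust
    return score

score = 0.0
for _ in range(10):           # repeated use of the same device
    score = update_device_score(score, successful_auth=True)

new_device = update_device_score(0.0, successful_auth=True)
assert score > new_device     # a familiar device ranks higher
```

A decayed score also captures the earlier point that trust isn't permanent: a device that stops being used slowly slides back towards "unknown".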
Choice of authentication methods (e.g. not wanting to use voice in a busy room, or take a selfie at night when sleepy, etc.). Fingerprint is the preferred method overall; millennials favour selfies as a second choice.
Authentication match scoring: false acceptance rate (twins, etc.) versus false rejection rate (dark room, awkward angle, skin colour in poor light, etc.). A fine balance.
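That trade-off can be made concrete with a toy calculation - the match scores and thresholds below are invented for illustration, not real biometric data:

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    """False acceptance rate: impostors scoring at/above the threshold.
    False rejection rate: genuine users scoring below it."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

genuine  = [0.91, 0.85, 0.78, 0.60, 0.95]   # e.g. dark room, awkward angle
impostor = [0.30, 0.55, 0.72, 0.40, 0.20]   # e.g. a lookalike twin

strict_far, strict_frr = far_frr(genuine, impostor, threshold=0.80)
loose_far,  loose_frr  = far_frr(genuine, impostor, threshold=0.50)

assert strict_far <= loose_far   # raising the threshold blocks impostors...
assert strict_frr >= loose_frr   # ...but rejects more genuine users
```

Moving the threshold only trades one error rate for the other, which is exactly why the talk framed it as a balance rather than a fix.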
Storing biometric templates locally vs centrally. Mastercard prefers not storing them centrally. (India has a legitimate reason to store centrally, as it's monitoring for welfare fraud, etc., and needs to verify across multiple locations.)
Other super-secret stuff.......
Layered security is key
Android is a problem, as users are allowed to root the device.
Mastercard doesn't believe in passwords.
For my own part, I was listening to some great podcasts recently which highlighted that the Internet was originally built solely for sharing, with no security in mind (remember that Vint Cerf, one of its founders, also had to fight to get the number of possible IP addresses increased from the original 1 million, as no-one else could envision more computing devices than that!). I've been intrigued to see how the privacy and security debate will play out. In this digital era, the first true era of abundance, how do you manage privacy and security when everything is connected? Additionally, 'the Internet isn't free and no-one is paying' - and yet, as we all know, nothing in life is free (so data is the currency for many services).
Not to mention the impacts on society, and how we manage governance when everyone is given a voice - especially when there is a push from many quarters for severe privacy restrictions, and yet, going by public perception, Most Facebook Users Don't Expect Much Privacy..... Very interesting times.
"Trustworthy data and systems cannot be separated from trustworthy people."
The main highlight for me was the first talk's focus on the human element and the manipulation of user trust. This isn't just a technical issue: it involves people. Like it or not, it's everyone's responsibility to be part of the system. Interestingly, this also aligns with my belief around child safety, etc.: it's not just a case of restricting access to the nefarious parts of the Internet - it involves training the individuals using those services at the same time. With all the emphasis on privacy at present, due to legislation like GDPR going live, there's a lot to consider here.
I've been thinking a lot about the topic of privacy and the Internet recently also - a topic for a separate post......