A review of links posted on Twitter associated with the infosec crowd. Compiled by @jnazario. Mostly automatically generated, with some hand edits to explain some links. Top 25 links (or so) presented in order of most popular to least popular.
Hello and welcome to Internet.Org, a site dedicated to communicating and supporting freedom and openness by and for the Internet.
Charlie Miller (left) and Chris Valasek hacking into a Jeep Cherokee from Miller's basement as I drove the SUV on a highway ten miles away. Miller and Valasek plan to publish a portion of their exploit on the Internet, timed to a talk they're giving at the Black Hat security conference in Las Vegas next month. WIRED has learned that senators Ed Markey and Richard Blumenthal plan to introduce an automotive security bill today to set new digital security standards for cars and trucks, first sparked when Markey took note of Miller and Valasek's work in 2013. The attack tools Miller and Valasek developed can remotely trigger more than the dashboard and transmission tricks they used against me on the highway. And thanks to one vulnerable element, which Miller and Valasek won't identify until their Black Hat talk, Uconnect's cellular connection also lets anyone who knows the car's IP address gain access from anywhere in the country. Miller and Valasek say the attack on the entertainment system seems to work on any Chrysler vehicle with Uconnect from late 2013, all of 2014, and early 2015. They've only tested their full set of physical hacks, including ones targeting transmission and braking systems, on a Jeep Cherokee, though they believe that most of their attacks could be tweaked to work on any Chrysler vehicle with the vulnerable Uconnect head unit. Hackers following in their footsteps will have to reverse-engineer that element, a process that took Miller and Valasek months. On July 16, owners of vehicles with the Uconnect feature were notified of the patch in a post on Chrysler's website that didn't offer any details or acknowledge Miller and Valasek's research. In the fall of 2012, Miller, a security researcher for Twitter and a former NSA hacker, and Valasek, the director of vehicle security research at the consultancy IOActive, were inspired by the UCSD and University of Washington study to apply for a car-hacking research grant from Darpa. When WIRED told Infiniti that at least one of Miller and Valasek's warnings had been borne out, the company responded in a statement that its engineers "look forward to the findings of this [new] study and will continue to integrate security features into our vehicles to protect against cyberattacks." It wasn't until June that Valasek issued a command from his laptop in Pittsburgh and turned on the windshield wipers of the Jeep in Miller's St. Louis driveway.
In its key harvesting trial operations in the first quarter of 2010, GCHQ successfully intercepted keys used by wireless network providers in Iran, Afghanistan, Yemen, India, Serbia, Iceland and Tajikistan. But, the agency noted, its automated key harvesting system failed to produce results against Pakistani networks, denoted as priority targets in the document, despite the fact that GCHQ had a store of Kis from two providers in the country, Mobilink and Telenor. At one point in March 2010, GCHQ intercepted nearly 100,000 keys for mobile phone users in Somalia. Another goal was to intercept private communications of employees in Poland that could lead to penetration into one or more personalisation centers, the factories where the encryption keys are burned onto SIM cards. If those governments had the encryption keys for major U.S. cell phone companies' customers, such as those manufactured by Gemalto, mass snooping would be simple. Rather than using the same encryption key to protect years' worth of data, as the permanent Kis on SIM cards can, a new key might be generated each minute, hour or day, and then promptly destroyed. Because cell phone communications do not utilize PFS, if an intelligence agency has been passively intercepting someone's communications for a year and later acquires the permanent encryption key, it can go back and decrypt all of those communications. If mobile phone networks were using PFS, that would not be possible even if the permanent keys were later stolen.
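As a rough illustration of the forward-secrecy idea described above, here is a minimal sketch in Python using the third-party cryptography package. It is a toy, not a description of how GSM/SIM authentication actually works (that relies on a long-lived symmetric Ki): each side generates a fresh ephemeral key pair, derives a session key, and then throws the ephemeral private key away, so a later theft of long-term key material does not let a passive eavesdropper recover recorded sessions.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def derive_session_key(shared_secret: bytes) -> bytes:
        # Stretch the raw ECDH output into a 32-byte session key.
        return HKDF(algorithm=hashes.SHA256(), length=32,
                    salt=None, info=b"toy-session").derive(shared_secret)

    # Each side generates a brand-new ("ephemeral") key pair for this session only.
    alice_eph = X25519PrivateKey.generate()
    bob_eph = X25519PrivateKey.generate()

    # Each side combines its own ephemeral private key with the peer's public key.
    alice_key = derive_session_key(alice_eph.exchange(bob_eph.public_key()))
    bob_key = derive_session_key(bob_eph.exchange(alice_eph.public_key()))
    assert alice_key == bob_key

    # When the session ends, the ephemeral private keys are discarded; without
    # them, recorded ciphertext cannot be decrypted even if long-term keys leak.
    del alice_eph, bob_eph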
We want to notify our community that on Friday, our team discovered and blocked suspicious activity on our network. In our investigation, we have found no evidence that encrypted user vault data was taken, nor that LastPass user accounts were accessed. The investigation has shown, however, that LastPass account email addresses, password reminders, server per-user salts, and authentication hashes were compromised.
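For context on why per-user salts and iterated authentication hashes matter in a breach like this, here is a minimal Python sketch of salted, iterated password hashing. The use of PBKDF2-SHA256 with 100,000 iterations is an assumption for illustration, not a statement of LastPass's exact server-side scheme; the point is that each stolen hash must be attacked separately and slowly.

    import hashlib, hmac, os

    ITERATIONS = 100_000  # illustrative work factor

    def make_auth_record(password: str) -> tuple[bytes, bytes]:
        # Store a unique random salt and the derived hash for each user.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, digest)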
Taken together, the accomplishments led Kaspersky researchers to conclude that Equation Group is probably the most sophisticated computer attack group in the world, with technical skill and resources that rival the groups that developed Stuxnet and the Flame espionage malware. "Every now and then they share them with the Stuxnet group and the Flame group, but they are originally available only to the Equation Group people." (Stuxnet, according to The New York Times, was a joint operation between the NSA and Israel, while Flame, according to The Washington Post, was devised by the NSA, the CIA, and the Israeli military.)
KnitYak: custom mathematical knit scarves. KnitYak makes custom, provably unique knit scarves and other computationally fabulous knit items with mathematical algorithms. Four years ago I bought my first consumer knitting machine, modified it, and fell in love with knitting algorithmic, computer-generated designs. To find algorithms that produce great knit patterns, I set out on a journey to find code that created images that look great "pixelly" as a knit. Knitting is made up of tiny "v"s, not square pixels, so that also played into the choice of algorithms. One of the algorithms I ended up loving was an elementary cellular automaton that generates great knit patterns which are non-repeating over some lengths, and yet not noise. Perhaps you want to knit matching items on your modified consumer knitting machine or with hand knitting. Your KnitYak scarf is knit using black and white cones of USA-produced premium-quality merino wool. The other aspect of quality in these knits is that using a top-of-the-line industrial knitting machine here in the USA means no hand work is required. As I love hand knitting, I wanted the time I'm not fulfilling orders to stay free rather than be taken up by hemming, casting off by hand, or hand-linking knit edges. The machine doing all the knitting and finishing frees me up to pack and ship all of your lovely custom orders and to hunt down and create fun new algorithms for knitting patterns. "Jacquard" is a misnomer in that a jacquard loom weaves while a knitting machine knits; in hand-knitting terminology, this knitting technique is called double knitting. The ultimate goal is to have the opposite colors knit on the back from the front, as with true double knitting in hand-knitting parlance. This may very well be possible with this run of Kickstartered scarves, but they will have at least one beautiful patterned side, as with consumer knitting machines. There is very little to no waste produced when knitting two-color scarves in double-bed jacquard. Knitting can be unravelled back to yarn that you can use to knit something else. The high-quality merino KnitYak uses won't wear out or pill and will be reusable far in the future as reclaimed yarn.
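As a rough sketch of the kind of elementary cellular automaton mentioned above (the post does not say which rule KnitYak uses; rule 110 and the grid size below are arbitrary choices for illustration), here is a minimal Python generator that prints a two-colour chart, one character per stitch:

    def eca_rows(rule: int, width: int, steps: int):
        # Yield successive rows of a 1-D elementary cellular automaton.
        table = [(rule >> i) & 1 for i in range(8)]   # rule number -> lookup table
        row = [0] * width
        row[width // 2] = 1                           # single seed cell in the middle
        for _ in range(steps):
            yield row
            row = [table[(row[(i - 1) % width] << 2) |
                         (row[i] << 1) |
                         row[(i + 1) % width]]
                   for i in range(width)]

    # Render the chart as two "stitch" colours.
    for r in eca_rows(110, 64, 32):
        print("".join("#" if c else "." for c in r))

Each cell's next state depends only on itself and its two neighbours, which is why the output stays locally structured rather than noisy while still avoiding simple repetition.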
The most underrated, underhyped vulnerability of 2015 has recently come to my attention, and I’m about to bring it to yours. No one gave it a fancy name, there were no press releases, nobody called Mandiant to come put out the fires. In fact, even though proof-of-concept code was released OVER 9 MONTHS AGO, none of the products mentioned in the title of this post have been patched, along with many more. In fact, no patch is available for the Java library containing the vulnerability. In addition to any commercial products that are vulnerable, this also affects many custom applications.
The Apple research is consistent with a much broader secret U.S. government program to analyze secure communications products, both foreign and domestic, in order to develop exploitation capabilities against the authentication and encryption schemes, according to the 2013 Congressional Budget Justification. But, they promised, their presentation could provide the intelligence community with a method to noninvasively extract encryption keys used on Apple devices. In a talk called "Strawhorse: Attacking the MacOS and iOS Software Development Kit," a presenter from Sandia Labs described a successful whacking of Apple's Xcode, the software used to create apps for iPhones, iPads and Mac computers. By quietly exploiting these flaws rather than notifying Apple, the U.S. government leaves Apple's customers vulnerable to other sophisticated governments. Over the years, as Apple updates its hardware, software and encryption methods, the CIA and its researchers study ways to break and exploit them. U.S. intelligence agencies are not just focusing on individual terrorists or criminals; they are targeting the large corporations, such as Apple, that produce popular mobile devices. By the end of 2013, according to the budget, the project would develop new capabilities against 50 commercial information security device products to exploit emerging technologies, as well as new methods that would allow spies to recover user and device passwords on new products. A second key, the Group ID, is known to Apple and is the same across multiple Apple devices that use the same processor. As Apple designs new processors and faster devices that use those processors, the company creates new GIDs. If someone has the same iPhone as her neighbor, they have the exact same GID key on their devices. Taken together, the documents make clear that researching each new Apple processor and mobile device, and studying them for potential security flaws, is a priority for the CIA. Those keys have value: get one and you get power over every single Apple device with the same processor, which is why Apple goes to such great lengths to make sure those keys aren't easily accessible. The encryption technology that Apple has built into its products along with many other security features is a virtual wall that separates cybercriminals and foreign governments from customer data. But now, because Apple claims it can no longer extract customer data stored on iPhones, because it is encrypted with a key the company does not know, the U.S. government can be locked out too, even with a search warrant.
The report, issued by Germany's Federal Office for Information Security (or BSI), indicates the attackers gained access to the steel mill through the plant's business network, then successively worked their way into production networks to access systems controlling plant equipment. Once the attackers got a foothold on one system, they were able to explore the company's networks, eventually compromising a multitude of systems, including industrial components on the production network. Although a network can only be considered truly air-gapped if it's not connected to the internet and is not connected to other systems that are connected to the internet, many companies believe that a software firewall separating the business and production network is sufficient to stop hackers from making that leap.
In a report to be published on Monday, and provided in advance to The New York Times, Kaspersky Lab says that the scope of this attack on more than 100 banks and other financial institutions in 30 nations could make it one of the largest bank thefts ever — and one conducted without the usual signs of robbery.
That's all well and good, is appropriate customer due diligence, and stops well short of "hey, I think I will do the vendor's job for him/her/it and look for problems in source code myself," even though: a customer can't analyze the code to see whether there is a control that prevents the attack the scanning tool is screaming about (which is most likely a false positive); a customer can't produce a patch for the problem, only the vendor can do that; and a customer is almost certainly violating the license agreement by using a tool that does static analysis (which operates against source code). I should state at the outset that in some cases I think the customers doing reverse engineering are not always aware of what is happening, because the actual work is being done by a consultant, who runs a tool that reverse engineers the code, gets a big fat printout, and drops it on the customer, who then sends it to us. More like, I do not need you to analyze the code since we already do that, it's our job to do that, we are pretty good at it, we can, unlike a third party or a tool, actually analyze the code to determine what's happening, and at any rate most of these tools have a close to 100% false positive rate, so please do not waste our time on reporting little green men in our code. The Oracle license agreement limits what you can do with the as-shipped code, and that limitation includes the fact that you aren't allowed to de-compile, dis-assemble, de-obfuscate or otherwise try to get source code back from executable code. Customers are welcome to use tools that operate on executable code but that do not reverse engineer code. That fact still doesn't justify a customer reverse engineering our code to attempt to find vulnerabilities, especially when the key to whether a suspected vulnerability is an actual vulnerability is the capability to analyze the actual source code, which frankly hardly any third party will be able to do, another reason not to accept random scan reports that resulted from reverse engineering at face value, as if we needed one. Bug bounties are the new boy band (nicely alliterative, no?). Many companies are screaming, fainting, and throwing underwear at security researchers to find problems in their code and insisting that This Is The Way, Walk In It: if you are not doing bug bounties, your code isn't secure. We ask that customers not reverse engineer our code to find suspected security issues: we have source code, we run tools against the source code (as well as against executable code), it's actually our job to do that, and we don't need or want a customer or random third party to reverse engineer our code to find security vulnerabilities.
At a hacker conference in August 1997, Mudge, whose real name is Peiter Zatko and who infused a zeal for showmanship into L0pht, visibly delighted in tweaking Microsoft as he described cracking the password security on Windows, at the time the standard operating system for business and government computers worldwide. L0pht sold T-shirts bearing its logo at conferences and also began selling its tool for cracking Windows passwords, called L0pht Crack, for $50 to system administrators eager to test the strength of passwords on the networks they managed. They also noticed how a rising generation of security consultants, including some doing stress testing using tactics much like L0pht's, were getting big paydays. The hackers' boasts about being able to take down the Internet in 30 minutes by exploiting flaws in a key Internet routing protocol called BGP prompted mentions from Conan O'Brien and Rush Limbaugh, who called them "long-haired nerd computer hackers." The hackers had joined @Stake, a security company built largely on L0pht's fame and $10 million in venture-capital funding. Wysopal and a fellow L0pht hacker, Dildog, founded the security company Veracode in 2006. The chief executive of @Stake, who was brought in to provide something like parental supervision to L0pht, ordered Wysopal to single out a member of the group for a layoff in order to balance out cuts elsewhere in the company. (He eventually settled with enough to buy a car, cover his lawyer's fees and put a down payment on a condominium, he says.)
Initial commit of the BinNavi source code, refactored to be ready for
The FREAK attack was originally discovered by Karthikeyan Bhargavan at INRIA in Paris and the miTLS team. However, instead of simply excluding RSA export cipher suites, we encourage administrators to disable support for all known insecure ciphers (e.g., there are export cipher suites beyond RSA) and enable forward secrecy.
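As one hedged illustration of that advice for a Python TLS server (the same idea applies to the cipher configuration of Apache, nginx, and other servers, whose directives differ), the sketch below rejects export-grade and other known-weak suites and prefers ephemeral key exchange for forward secrecy. The exact cipher string and file names are examples, not a recommendation from the FREAK researchers:

    import ssl

    # Build a server-side TLS context that refuses export-grade and other weak
    # ciphers and prefers ephemeral (ECDHE/DHE) key exchange for forward secrecy.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.set_ciphers(
        "ECDHE+AESGCM:DHE+AESGCM"
        ":!EXPORT:!aNULL:!eNULL:!RC4:!3DES:!MD5"
    )
    ctx.load_cert_chain("server.crt", "server.key")  # hypothetical cert/key paths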
However, if X and Y point to different rows in the same bank, code1a will cause X and Y's rows to be repeatedly activated. Flushing the cache using CLFLUSH forces the memory accesses to be sent to the underlying DRAM, which is necessary to cause the rows to be repeatedly activated. Whereas a normal 4k page is smaller than a typical DRAM row, a 2MB page will typically cover multiple rows, some of which will be in the same bank. For machines where random selection is already sufficient to cause bit flips, double-sided hammering can lead to a vastly increased number of bits flipped. We did this by observing the likelihood of bit flips relative to the distance the selected physical memory pages had from the victim page. NaCl's approach of validating machine code is particularly vulnerable to bit flips, because:
That wasn't the case for the YouTube information, however; Twitter user pent0thal confirmed that account's displayed password was "lemotdepassedeyoutube," which translates in English to "the password of YouTube." However, the same Twitter user found another screengrab of a completely different news segment about TV5Monde's hacking ordeal, which also contained a post-it note with what appeared to be a staffer's general-use username and password.
After a robust discussion on our community mailing list, Mozilla is committing to focus new development efforts on the secure web, and start removing capabilities from the non-secure web. There are two broad elements of this plan:
While egghunts weren't new, this was a new flavor of shellcode for netapi32 exploits and clear evidence of a successful exploit. I was going in reverse: examining an exploit to determine the vulnerability, armed with only a forensic crash and no way to reproduce it. Here was my dilemma: if I could not find the vulnerability, despite having a clear exploit, we could not act. That is, finding all the relevant crashes that represent this issue on every version of Windows, every SKU, every possible way the exploit could fail. Normally this invalid input would cause the buggy code to crash, but the exploit authors figured out a way around this problem. In a quirk of fate, the Windows RPC thread pool handed the second request containing the exploit to a different thread, one that did not have the carefully placed slash character. My interpretation is that the attack had succeeded and was downloading the payload, but the attacker got impatient or goofed and ran the exploit a second time. Attackers don't hesitate to download the patch, diff it, and start building exploits; defenders caught on their back foot may be at a disadvantage as they scramble to rearrange their schedule to deploy the update. I soon saw crashes from people rediscovering the vulnerability, from vulnerability scanners probing for vulnerable systems, and soon enough, from botnet exploits.
These stories of being threatened are common throughout the tight-knit community of high-profile cybersecurity researchers, but few are willing to share them openly. "If you are engaged in tracking cybercriminals, in research, you have to be really careful about your surroundings, your family, the people around you," said Righard Zwienenberg, ESET security expert. "People doing this kind of research take the risk knowingly and willingly." While this secretive lifestyle might be alluring to some, most cybersecurity researchers are, by nature, geeks. Attacks may also happen simply for the lulz, as hackers often want to challenge or amuse themselves. Threats can include "subtle pressure, patriotic enlistment, bribery, compromise and blackmail, legal repercussions, threat to livelihood, threat to viability of life in the actor's area of influence, threat of force, or elimination," depending on the attacker, cybersecurity expert Juan Andres Guerrero-Saade wrote in a paper presented this fall at the Virus Bulletin Conference in Prague. The scariest threats may come from governments, he said. "The researcher as a private individual faces unique challenges when in cross-hairs of a nation-state actor," he wrote. Russian security company Dr. Web, whose products are used by banking, telecom and oil companies, was bombed with Molotov cocktails after its researchers revealed that a gang created and sold a Trojan able to steal money from ATMs. Dr. Web received a threat by email saying the company had a week to delete all references to the group posted online. In the spring of 2013, police received a phony emergency call from his house, a practice known as swatting: calling in a false 911 emergency that will bring a SWAT (Special Weapons and Tactics) team to a victim's home. Cybersecurity researchers know how to guard themselves in the digital world.
Juniper provided guidance on what the logs from a successful intrusion would look like:

2015-12-17 09:00:00 system warn 00515 Admin user system has logged on via SSH from ..
2015-12-17 09:00:00 system warn 00528 SSH: Password authentication successful for admin user username2 at host

Although an attacker could delete the logs once they gain access, any logs sent to a centralized logging server (or SIEM) would be captured and could be used to trigger an alert. Fox-IT has created a set of Snort rules that can detect access with the backdoor password over Telnet and fire on any connection to a ScreenOS Telnet or SSH service:

# Signatures to detect successful abuse of the Juniper backdoor password over telnet.
# Additionally a signature for detecting world reachable ScreenOS devices over SSH.
alert tcp any any -> $HOME_NET 23 (msg:"FOX-SRT - Backdoor - Juniper ScreenOS telnet backdoor password attempt"; ...; rev:2;)
alert tcp $HOME_NET 23 -> any any (msg:"FOX-SRT - Backdoor - Juniper ScreenOS successful logon"; ...
In some cases, the surveillance agencies are obtaining the content of emails by monitoring hackers as they breach email accounts, often without notifying the hacking victims of these breaches ("Hackers are stealing the emails of some of our targets... by collecting the hackers' take, we ..."). The documents, from the Government Communications Headquarters agency and the NSA, shed new light on the various means used by intelligence agencies to exploit hackers' successes and learn from their skills, while also raising questions about whether governments have overstated the threat posed by some hackers.
How is NSA breaking so much crypto? The Snowden documents also hint at some extraordinary capabilities: they show that NSA has built extensive infrastructure to intercept and decrypt VPN traffic and suggest that the agency can decrypt at least some HTTPS and SSH connections on demand. For the most common strength of Diffie-Hellman (1024 bits), it would cost a few hundred million dollars to build a machine, based on special-purpose hardware, that would be able to crack one Diffie-Hellman prime every year. Breaking a single, common 1024-bit prime would allow NSA to passively decrypt connections to two-thirds of VPNs and a quarter of all SSH servers globally. For instance, the Snowden documents show that NSA's VPN decryption infrastructure involves intercepting encrypted connections and passing certain data to supercomputers, which return the key. While the documents make it clear that NSA uses other attack techniques, like software and hardware implants, to break crypto on specific targets, these don't explain the ability to passively eavesdrop on VPN traffic at a large scale.
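To make the economics of that precomputation concrete: finite-field Diffie-Hellman keeps the prime p and generator g public, and in practice a handful of standardized 1024-bit primes are reused across a huge fraction of servers, so one enormous precomputation against a single p pays off against all of them. A toy Python sketch of the exchange (deliberately tiny, non-standard parameters, purely illustrative):

    import secrets

    # Toy finite-field Diffie-Hellman. Real deployments use standardized groups,
    # and it is the reuse of one 1024-bit prime across millions of servers that
    # makes a single massive precomputation against that prime worthwhile.
    p = 2**127 - 1   # a Mersenne prime, used here only as a toy modulus
    g = 5

    a = secrets.randbelow(p - 2) + 1       # client's secret exponent
    b = secrets.randbelow(p - 2) + 1       # server's secret exponent
    A = pow(g, a, p)                       # exchanged in the clear
    B = pow(g, b, p)                       # exchanged in the clear

    # Both sides derive the same shared secret; a passive eavesdropper must
    # solve a discrete logarithm modulo p to recover it.
    assert pow(B, a, p) == pow(A, b, p)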