
Microsoft rescues XP users with emergency browser fix


BOSTON - Microsoft is helping the estimated hundreds of millions of customers still running Windows XP, which it stopped supporting earlier this month, by providing an emergency update to fix a critical bug in its Internet Explorer browser.

Microsoft Corp rushed to create the fix after learning of the browser bug over the weekend, when cybersecurity firm FireEye Inc warned that a sophisticated group of hackers had exploited it to launch attacks in a campaign dubbed "Operation Clandestine Fox."

It was the first high-profile threat to emerge after Microsoft stopped providing support to its 13-year-old XP operating software on April 8.

Microsoft on Wednesday initially said it would not provide the remedy to Windows XP users because it had stopped supporting the product. But on Thursday, as Microsoft started releasing the fix for the bug through its automated Windows Update system, a company spokeswoman said the remedy also would be pushed out to XP customers.

"We decided to fix it, fix it fast, and fix it for all our customers," spokeswoman Adrienne Hall said on Microsoft's official blog.

She said there had not been many attacks exploiting the vulnerability, which Microsoft decided to patch in XP "based on the proximity" to its recent end of support.

"There have been a very small number of attacks based on this particular vulnerability and concerns were, frankly, overblown," she said in the blog.

At the end of last week, FireEye initially uncovered attacks involving recent versions of Windows that are still supported by Microsoft.

Then, three days ago, it began identifying attacks on Windows XP, attacks that XP users would not have been able to thwart had Microsoft not decided to roll out the update to them in addition to its other customers.

FireEye said in a blog published on Thursday that it had observed new groups of hackers exploiting the vulnerability to attack targets in government and energy sectors, in addition to previously identified financial and defense industries.

Microsoft was under pressure to move quickly as the U.S., UK and German governments advised computer users on Monday to consider using alternatives to Microsoft's Explorer browser until it released a fix.

Microsoft had first warned in 2007 that it was planning to end support for Windows XP, but security firms estimated that 15 to 25 percent of the world's personal computers still run the operating system, which was released in October 2001.

(Reporting by Jim Finkle; Editing by Jeffrey Benkoe and Leslie Adler)


Gimli Glider

Air Canada Flight 143

Flight 143 after landing at Gimli, Manitoba.

Accident summary
Date: July 23, 1983
Summary: Fuel exhaustion due to maintenance error
Site: Emergency landing at Gimli Industrial Park Airport, Gimli, Manitoba
Coordinates: 50°37′44″N 97°02′38″W (50.62889, −97.04389)
Passengers: 61
Crew: 8
Injuries (non-fatal): 10
Fatalities: 0
Survivors: 69 (all)
Aircraft type: Boeing 767-233
Operator: Air Canada
Registration: C-GAUN
Flight origin: Montreal-Dorval International Airport
Destination: Edmonton International Airport

The Gimli Glider is the nickname of an Air Canada aircraft that was involved in an unusual aviation incident. On July 23, 1983, Air Canada Flight 143, a Boeing 767-233 jet, ran out of fuel at an altitude of 41,000 feet (12,000 m) MSL, about halfway through its flight from Montreal to Edmonton. The crew were able to glide the aircraft safely to an emergency landing at Gimli Industrial Park Airport, a former Royal Canadian Air Force base in Gimli, Manitoba.[1]

The subsequent investigation revealed company failures and a chain of human errors that combined to defeat built-in safeguards. Fuel loading was miscalculated due to a misunderstanding of the recently adopted metric system which replaced the imperial system.

History

On July 22, 1983, Air Canada's Boeing 767 (registration C-GAUN, c/n 22520/47)[2] flew from Toronto to Edmonton where it underwent routine checks. The next day, it was flown to Montreal. Following a crew change, it departed Montreal as Flight 143 for the return trip to Edmonton, with Captain Robert (Bob) Pearson, 48, and First Officer Maurice Quintal at the controls.

Running out of fuel

On July 23, 1983, Flight 143 was cruising at 41,000 feet over Red Lake, Ontario, when the aircraft's cockpit warning system sounded, indicating a fuel pressure problem on the aircraft's left side. Assuming a fuel pump had failed, the pilots turned it off,[3] since gravity should feed fuel to the aircraft's two engines. The aircraft's fuel gauges were inoperative because of an electronic fault, which was indicated on the instrument panel and in the airplane logs (the pilots believed the flight to be legal with this malfunction). The flight management computer indicated that there was still sufficient fuel for the flight, but the initial fuel load had been measured in pounds instead of kilograms. A few moments later, a second fuel pressure alarm sounded for the right engine, prompting the pilots to divert to Winnipeg. Within seconds, the left engine failed and they began preparing for a single-engine landing.

As they communicated their intentions to controllers in Winnipeg and tried to restart the left engine, the cockpit warning system sounded again with the "all engines out" sound, a long "bong" that no one in the cockpit could recall having heard before and that was not covered in flight simulator training.[3] Flying with all engines out was something that was never expected to occur and had therefore never been covered in training.[4] Seconds later, with the right-side engine also stopped, the 767 lost all power, and most of the instrument panels in the cockpit went blank.

The 767 was one of the first airliners to include an Electronic Flight Instrument System (EFIS), which operated on the electricity generated by the aircraft's jet engines. With both engines stopped, the system went dead, leaving only a few basic battery-powered emergency flight instruments. While these provided sufficient information with which to land the aircraft, a vertical speed indicator—that would indicate the rate at which the aircraft was descending and therefore how long it could glide unpowered—was not among them.

On airliners the size of the 767, the engines also supply power for the hydraulic systems without which the aircraft cannot be controlled. Such aircraft are therefore required to accommodate this kind of power failure. As with the 767, this is usually achieved through the automated deployment of a ram air turbine, a hydraulic pump (and on some airplanes a generator) driven by a small propeller, which in turn is driven by the forward motion of the aircraft through the air. As the Gimli pilots were to experience on their landing approach, a decrease in this forward speed means a decrease in the power available to control the aircraft.

Landing at Gimli

In line with their planned diversion to Winnipeg, the pilots were already descending through 35,000 feet (11,000 m)[2] when the second engine shut down. They immediately searched their emergency checklist for the section on flying the aircraft with both engines out, only to find that no such section existed.[3] Captain Pearson was an experienced glider pilot, which gave him familiarity with flying techniques almost never used by commercial pilots. To have the maximum range and therefore the largest choice of possible landing sites, he needed to fly the 767 at the "best glide speed". Making his best guess as to this speed for the 767, he flew the aircraft at 220 knots (410 km/h; 250 mph). First Officer Maurice Quintal began to calculate whether they could reach Winnipeg. He used the altitude from one of the mechanical backup instruments, while the distance traveled was supplied by the air traffic controllers in Winnipeg, measuring the distance the aircraft's echo moved on their radar screens. The aircraft lost 5,000 feet (1,500 m) in 10 nautical miles (19 km; 12 mi), giving a glide ratio of approximately 12:1.
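
That ratio follows directly from the two measurements (10 nautical miles is roughly 60,760 feet):

60,760 ft ÷ 5,000 ft ≈ 12, i.e. roughly 12 feet forward for every foot of altitude lost.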

At this point, Quintal proposed landing at the former RCAF Station Gimli, a closed air force base where he had once served as a Royal Canadian Air Force pilot. Unknown to him, part of the facility had been converted to a race track complex, now known as Gimli Motorsports Park.[5] It includes a road race course, a go-kart track, and a dragstrip. A Canadian Automobile Sport Clubs-sanctioned sports car race hosted by the Winnipeg Sports Car Club was under way the Saturday of the incident and the area around the decommissioned runway was full of cars and campers. Part of the decommissioned runway was being used to stage the race.[6]

Without power, the pilots attempted lowering the aircraft's main landing gear via a gravity drop. The main gear locked into position, but the nose wheel was unable to do so, which later turned out to be advantageous to the situation. As the aircraft slowed on approach to landing, the ram air turbine generated less power, rendering the aircraft increasingly difficult to control.

As the runway drew near, it became apparent that the aircraft was too high and fast, raising the danger of running off the runway before the aircraft could be stopped. The lack of hydraulic pressure prevented flap/slat extension which would have, under normal landing conditions, reduced the stall speed of the aircraft and increased the lift coefficient of the wings allowing the aircraft to be slowed for a safe landing. The pilots briefly considered executing a 360-degree turn to reduce speed and altitude, but decided that they did not have enough altitude for the maneuver. Pearson decided to execute a forward slip to increase drag and lose altitude. This maneuver is commonly used with gliders and light aircraft to descend more quickly without increasing the already-too-fast forward speed.

As soon as the wheels touched the runway, Pearson "stood on the brakes", blowing out two of the aircraft's tires. The unlocked nose wheel collapsed and was forced back into its well, causing the aircraft's nose to slam into, bounce off, and then scrape along the ground. The collapsed nose wheel helped to slow the airplane and prevent collateral damage to the people on the ground. The nose also scraped along the guardrail that now divided the strip, which further slowed the aircraft.[3]

None of the 61 passengers were seriously hurt. A minor fire in the nose area was extinguished by racers and course workers armed with fire extinguishers. As the aircraft's nose had collapsed onto the ground, its tail was elevated and there were some minor injuries when passengers exited the aircraft via the rear slides which were not long enough to accommodate the increased height.

Investigation

An Air Canada investigation concluded that the pilots and mechanics were at fault, although the Aviation Safety Board of Canada (predecessor of the modern Transportation Safety Board of Canada) found the airline at fault.

The safety board reported that Air Canada management was responsible for "corporate and equipment deficiencies." The report praised the flight and cabin crews for their "professionalism and skill."[4] It noted that Air Canada "neglected to assign clearly and specifically the responsibility for calculating the fuel load in an abnormal situation,"[4] finding that the airline had failed to reallocate the task of checking fuel load that had been the responsibility of the flight engineer on older (three-crew) aircraft. The safety board also said that Air Canada needed to keep more spare parts, including replacements for the defective fuel quantity indicator, in its maintenance inventory, as well as provide better, adequate training on the metric system to its pilots and fuelling personnel.

A final report of the investigation was published in 1985: Final Report of the Board of Inquiry Investigating the Circumstances of an Accident Involving the Air Canada Boeing 767 Aircraft C-GAUN That Effected an Emergency Landing at Gimli, Manitoba on the 23rd Day of July, 1983, Commissioner George H. Lockwood, Ottawa: Government of Canada, 1985, vi + 199 pp., ISBN 0-660-11884-X.

Fuel quantity indicator system

The amount of fuel in the tanks of a Boeing 767 is computed by the Fuel Quantity Indicator System (FQIS) and displayed in the cockpit. The FQIS on the incident aircraft had two channels, each with its own processor, independently calculating the fuel quantity and cross-checking with the other. If one channel failed, the other could still operate alone, but under those circumstances the indicated quantity had to be cross-checked against a floatstick measurement before departure. If both channels failed there would be no fuel display in the cockpit, and the aircraft would be considered unserviceable and not authorized to fly.

Because inconsistencies were found with the FQIS in other 767s, Boeing had issued a service bulletin for the routine checking of this system. An engineer in Edmonton duly did so when the aircraft arrived from Toronto following a trouble-free flight the day before the incident. While conducting this check, the FQIS failed and the cockpit fuel gauges went blank. The engineer had encountered the same problem earlier in the month when this same aircraft had arrived from Toronto with an FQIS fault. He found then that disabling the second channel by pulling the circuit breaker in the cockpit restored the fuel gauges to working order albeit with only the single FQIS channel operative. In the absence of any spares he simply repeated this temporary fix by pulling and tagging the circuit breaker.

A record of all actions and findings was made in the maintenance log, including the entry: "SERVICE CHK – FOUND FUEL QTY IND BLANK – FUEL QTY #2 C/B PULLED & TAGGED...".[7] This reports that the fuel gauges were blank and that the second FQIS channel was disabled, but does not make clear that the latter fixed the former.

On the day of the incident, the aircraft flew from Edmonton to Montreal. Before departure the engineer informed the pilot of the problem and confirmed that the tanks would have to be verified with a floatstick. In a misunderstanding, the pilot believed that the aircraft had been flown with the fault from Toronto the previous afternoon. That flight proceeded uneventfully with fuel gauges operating correctly on the single channel.

On arrival at Montreal, there was a crew change for the return flight back to Edmonton. The outgoing pilot informed Captain Pearson and First Officer Quintal of the problem with the FQIS and passed along his mistaken belief that the aircraft had flown the previous day with this problem. In a further misunderstanding, Captain Pearson believed that he was also being told that the FQIS had been completely unserviceable since then.

While the aircraft was being prepared for its return to Edmonton, a maintenance worker decided to investigate the problem with the faulty FQIS. To test the system he re-enabled the second channel, at which point the fuel gauges in the cockpit went blank. He was called away to perform a floatstick measurement of fuel remaining in the tanks. Distracted, he failed to disable the second channel, leaving the circuit breaker tagged (which masked the fact that it was no longer pulled). The FQIS was now completely unserviceable and the fuel gauges were blank.

On entering the cockpit, Captain Pearson saw what he was expecting to see: blank fuel gauges and a tagged circuit breaker. He consulted the aircraft's Minimum Equipment List (MEL), which told him that the aircraft could not be flown in this condition. The 767 was still a very new aircraft, having flown its maiden flight in September 1981. C-GAUN was the 47th Boeing 767 off the production line, delivered to Air Canada less than 4 months previously.[8] In that time there had been 55 changes to the MEL, and some pages were still blank pending development of procedures.

Because of this unreliability, it had become standard practice for such flights to be authorized by maintenance personnel instead. Captain Pearson's own misconception about the condition the aircraft had been flying in since the previous day was reinforced by what he saw in the cockpit, and he now also had a signed-off maintenance log, which custom had come to place above the Minimum Equipment List.

Refueling

At the time of the incident, Canada was converting to the metric system. As part of this process, the new 767s being acquired by Air Canada were the first to be calibrated for metric units (litres and kilograms) instead of customary units (gallons and pounds). All other aircraft were still operating with Imperial units (gallons and pounds). For the trip to Edmonton, the pilot calculated a fuel requirement of 22,300 kilograms (49,200 lb). A dripstick check indicated that there were 7,682 litres (1,690 imp gal; 2,029 US gal) already in the tanks. To calculate how much more fuel had to be added, the crew needed to convert the quantity in the tanks to a weight, subtract that figure from 22,300 kg and convert the result back into a volume. In previous times, this task would have been completed by a flight engineer, but the 767 was the first of a new generation of airliners that flew only with a pilot and co-pilot, and without a flight engineer.

The volume of a kilogram of jet fuel varies with temperature. In this case, the mass of a litre of the fuel (the figure referred to on the paperwork as its "specific gravity") was 0.803 kg, so the correct calculation was:

7682 L × 0.803  kg/L = 6169 kg
22300 kg − 6169 kg = 16131 kg
16131 kg ÷ (0.803 kg/L) = 20088 L of fuel to be transferred

Between the ground crew and pilots, they arrived at an incorrect conversion factor of 1.77, the weight of a litre of fuel in pounds. This was the conversion factor provided on the refueller's paperwork and which had always been used for the airline's imperial-calibrated fleet. Their calculation produced:

7682 L × 1.77 kg/L = 13597 kg
22300 kg − 13597 kg = 8703 kg
8703 kg ÷ (1.77 kg/L) = 4916 L of fuel to be transferred

Instead of 22,300 kg of fuel, they had 22,300 pounds on board — 10,100 kg, about half the amount required to reach their destination. Knowing the problems with the FQIS, Captain Pearson double-checked their calculations but was given the same incorrect conversion factor and inevitably came up with the same erroneous figures.
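
To see where the figure of 22,300 pounds comes from: the 7,682 L already in the tanks plus the 4,916 L uplifted gave a total of roughly 12,598 L, and

12,598 L × 1.77 lb/L ≈ 22,300 lb — the number the crew believed to be kilograms
12,598 L × 0.803 kg/L ≈ 10,100 kg of fuel actually on board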

The Flight Management Computer (FMC) measures fuel consumption, allowing the crew to keep track of fuel burned as the flight progresses. It is normally updated automatically by the FQIS, but in the absence of this facility it can be updated manually. Believing he had 22,300 kg of fuel on board, this is the figure the captain entered.

Because the FMC would reset during the stopover in Ottawa, the captain had the fuel tanks measured again with the dipstick while there. In converting the quantity to kilograms, the same incorrect conversion factor was used, leading him to believe he now had 20,400 kg of fuel; in reality, he had less than half that amount.

Aftermath

Following Air Canada's internal investigation, Captain Pearson was demoted for six months, and First Officer Quintal was suspended for two weeks. Three maintenance workers were also suspended.[9] In 1985 the pilots were awarded the first ever Fédération Aéronautique Internationale Diploma for Outstanding Airmanship.[10] Several attempts by other crews who were given the same circumstances in a simulator at Vancouver resulted in crashes.[11] Quintal was promoted to captain in 1989, and Pearson retired in 1993.[12]

The aircraft was temporarily repaired at Gimli and flew out two days later to be fully repaired at a maintenance base in Winnipeg. Following the successful appeal of their suspensions, Pearson and Quintal were assigned as crew members aboard another Air Canada flight. As they boarded the aircraft, and realized that it was the same one that was involved in the Gimli incident, they lightly joked about not repeating the performance. After almost 25 years of service, the airplane flew its last revenue flight on January 1, 2008. Air Canada still uses the flight number 143, but the route is now Montreal–Ottawa–Edmonton, or St. John's–Halifax–Ottawa–Edmonton (depending on season) using an Embraer 190 aircraft.[13]

Retirement

Gimli Glider parked at Mojave Airport & Spaceport in February 2008 (C-GAUN Air Canada livery was subsequently removed)

On January 24, 2008, the Gimli Glider took its final voyage, AC7067, from Montreal Trudeau to Tucson International Airport before its retirement in the Mojave Desert.[12] An Air Canada newsletter "The Daily" states:[14]

The Gimli Glider retires to the desert. On Thursday, 24 January, fin 604, the Boeing 767-200 better known as the Gimli Glider, will undertake its final voyage from Montreal to Mojave Airport (MHV) before it is retired to the desert. Employees and retirees (bring valid employee ID) are invited to come and say goodbye to the aircraft, which has now become part of Canadian aviation history. Fin 604 is set to depart as flight AC7067, at 9:00 a.m. from the Montreal Line Maintenance hangar - Air Canada Base, 750 Côte Vertu West; Building 7, Bay 8/13 (West end), Gate entrance 5. Captain Robert Pearson and First Officer Maurice Quintal, the flight crew who landed the aircraft to safety in Gimli on 23 July 1983 are expected to be on hand for the aircraft's departure. The hangar will be open to well-wishers from 8:00 a.m.

Flight AC7067 was captained by Jean-Marc Bélanger, a former head of the Air Canada Pilots Association, while captains Robert Pearson and Maurice Quintal were on board to oversee the flight from Montreal to California's Mojave Airport. Also on board were three of the six flight attendants who were on Flight 143.[3][12]

Flight tracking services FlightAware and FlightView indicated on January 24, 2008 that 604's initial flight was from Montreal (CYUL) to Tucson International Airport (KTUS), having a planned cruise altitude of FL400. According to FlightAware, 604 landed at 12:53 p.m. (MST) at Tucson International Airport (KTUS). The Gimli Glider was then scheduled to depart Tucson and make the final flight to the Mojave Airport (KMHV) for retirement, but was delayed.

On the 25th anniversary of the incident in 2008, pilots Pearson and Quintal were celebrated in a parade in Gimli, and a mural was dedicated to commemorate the landing.[15]

In April 2013 the Gimli Glider was offered for sale at auction with an estimated price of 2.75–3 million CAD.[16] However, bidding only reached 425,000 CAD and the lot was unsold.[17]


References

  1. Witkin, Richard (July 30, 1983). The New York Times. Retrieved 2007-08-21. "Air Canada said yesterday that its Boeing 767 jet ran out of fuel in mid-flight last week because of two mistakes in figuring the fuel supply of the airline's first aircraft to use metric measurements. After both engines lost their power, the pilots made what is now thought to be the first successful emergency 'dead stick' landing of a commercial jetliner."
  2. Accident description, aviation-safety.net (accessed 2008-07-24).
  3. Nelson, Wade H. (October 1997). WadeNelson.com (originally published in Soaring Magazine). Retrieved 2013-11-09.
  4. Williams, Merran (July–August 2003). Flight Safety Australia: 27. Retrieved 2013-02-20.
  5. Gimli Motorsports Park website.
  6. Red River PCA website.
  7. Stewart, Stanley (1992). Emergency, Crisis on the Flightdeck. Airlife Publishing Ltd. p. 123. ISBN 1-85310-348-9.
  8. "C-GAUN manufacture date". planespotters.net. Retrieved 2007-06-04.
  9. "'Gimli glider' recalled at trial of pilot in crash". CBC. 2007. Retrieved 2013-11-09.
  10. Retrieved 2007-06-05.
  11. Air Crash Investigation (TV program), National Geographic Channel.
  12. Jang, Brent (March 13, 2009). The Globe and Mail. Retrieved 2013-11-09.
  13. (PDF). Air Canada. October 23, 2009. p. 48. Retrieved 2009-10-29.
  14. "The Gimli Glider retires to the desert". Air Canada: The Daily (internal employee newsletter), January 22, 2008.
  15. CBC News. July 23, 2008. Retrieved 2013-11-09.
  16. Winnipeg Free Press (February 22, 2013). TheStar.com. Retrieved 2013-06-26.
  17. "'Gimli Glider' not sold at Ontario auction". CBC News. April 14, 2013. Retrieved 2013-06-26.

Further reading

  • Emergency, Crisis on the Flight Deck, Stanley Stewart, Airlife Publishing Ltd., 1992, ISBN 1-85310-348-9
  • Freefall: From 41,000 feet to zero - a true story, William and Marilyn Hoffer, Simon & Schuster, 1989 ISBN 978-0-671-69689-4
  • Engineering Disasters - Lessons to be Learned, Don Lawson, ASME Press, 2005, ISBN 0-7918-0230-2. Pages 221-9 deal specifically with Gimli Glider.


An Opinionated Guide to Modern Java Development, Part 1

May 01, 2014

More working, useful code has been written in the Java programming language than in any other in history, with the possible exceptions of C and COBOL. When Java was released almost 20 years ago, it took the software world by storm. It was a simpler, safer alternative to C++, and some time later its performance caught up, too (depending on the exact usage, a large Java program can be slightly slower, as fast, or a little faster than a comparable C++ codebase). It offered truly tremendous productivity benefits over C++, while sacrificing very little (if anything at all) in return.

Java is a blue-collar language – the working person’s trusty tool – adopting only tried and true idioms, and adding features only if they solve major pain points. Whether Java has stayed true to this mission is an open question, but it certainly tries not to let current fashions sway it too far off course. Java has been used to write code for anything from smart cards, through embedded devices, all the way up to mainframes. It is even being used to write mission- and safety-critical hard realtime software.

And yet, in recent years the Java programming language has gained some notoriety as well, especially among web startups. Java is verbose relative to languages like Ruby or Python, and its web frameworks used to require extensive amounts of XML configuration, especially when compared to configuration-free frameworks like Rails. In addition, Java’s widespread use in large enterprise companies led to the adoption of programming patterns and practices that might have a place in a very large programming team working for a company with extensive bureaucracy, but do not belong in a move-fast-and-break-things startup.

But all the while, Java has changed. The language recently acquired lambda expressions and traits; libraries provide it with true lightweight threads – just like Erlang’s and Go’s. And, most importantly, a more modern, lightweight approach now guides API, library and framework design, replacing all the old heavyweight, XML-laden ones.

Another thing has happened in the Java ecosystem in the past few years: a number of good implementations of alternative languages for the JVM have started gaining popularity, and some of those languages are quite good (my personal favorites are Clojure and Kotlin). But even with those languages as viable (and sometimes recommended) options, Java has several advantages over other JVM languages, among them familiarity, support, maturity, and community. With modern tools and modern libraries, Java actually has a lot going for it. It is not surprising, therefore, that many Silicon Valley startups, once they grow a bit, come back to Java, or, at the very least, to the JVM.

This opinionated, introductory guide is intended for the Java programmer (all 9 million of them) who wants to learn how to write modern, lean Java, or for the Python/Ruby/JavaScript programmer who’s heard (or may have experienced) bad things about Java and is curious to see how things have changed, and how they can get Java’s awesome performance, flexibility and monitoring without sacrificing too much coolness.

The JVM

For those unfamiliar with Java terminology, Java is conceptually made of three parts: Java the programming language, the Java runtime libraries, and the Java Virtual Machine, or JVM. If you’re familiar with Node.js, Java the language is analogous to JavaScript, the runtime libraries are analogous to Node.js itself, and the JVM would be analogous to V8. The JVM and runtime libraries are packaged together into what is known as the Java Runtime Environment, or the JRE (although often when people say “JVM” they actually mean the entire JRE). The Java Development Kit, or the JDK, is a version of the JRE that includes development tools like javac, the Java compiler, and various monitoring and profiling tools. The JRE comes in several flavors, like those made for embedded devices, but in this blog post series we will only be referring to the JRE made for server (or desktop) machines, known as Java SE (Standard Edition).

There are quite a few implementations of the JVM (or the JRE) – some are open-source and some are commercial. Some are highly specific: for example, there are JVMs for hard-realtime embedded software, and those made for huge RAM sizes (in the hundreds of gigabytes). But we will be using HotSpot, the free, “common” JVM implementation made by Oracle, which is also available as part of the open-source OpenJDK.

Java was built for the JVM, and the JVM was built for Java (recently, though, the JVM has undergone some modifications specifically with other programming languages in mind). But what is the JVM? This talk by Cliff Click explains what the JVM does, but put simply, the JVM is an abstraction-implementation magic machine. It takes nice, simple, and useful abstractions, like infinite memory and polymorphism – which sound costly to implement – and implements them so efficiently that they can easily compete with runtimes that don’t provide these useful abstractions. More specifically, the JVM has the best garbage collection implementations in widespread production use, and its JIT allows it to inline and optimize virtual method calls (which are at the core of the most useful abstractions in most programming languages), making them extremely cheap while preserving all of their usefulness. The JVM’s JIT (Just-In-Time compiler) is basically a highly advanced profile-guided optimizing compiler running concurrently with your application.

The JVM also hides many of the idiosyncrasies of the underlying hardware/OS platform, like the memory model (how and when code running on different CPU cores sees changes to variables made on other cores) and access to timers. It also offers dynamic runtime linking of all code, hot code swapping, and monitoring of pretty much everything that’s going on in the JVM itself and in the Java libraries.

That is not to say that the JVM is perfect. Right now it’s missing the ability to embed complex structs inside arrays (this is scheduled to be resolved in Java 9) and proper tail-call optimization. Nevertheless, the JVM is so mature, well-tested, fast, and flexible, and allows for such detailed runtime profiling and monitoring, that I wouldn’t consider running a critical, non-trivial server process on anything else.

But enough theory. Before we go any further, you should download and install the latest JDK here, or, if you prefer, use your OS’s package manager to install a recent version of the OpenJDK.

The Build

We will start our tour of modern Java with the build tool. Java has had several build tools over its longish history (Ant, Maven), and yes, most of them were based on XML. But the modern Java developer uses Gradle (which has recently become Android’s official build tool). Gradle is a mature, heavily developed, modern Java build tool that uses a DSL built on top of the Groovy language to specify the build process. It combines the simplicity of Maven with the power and flexibility of Ant, while throwing away all that XML. But Gradle is not without its faults: while it makes the most common things easy and declarative, quite a few tasks that are fairly (though not very) common still require dropping down to imperative Groovy.

So let’s create a new modern Java project with Gradle. First, we’ll download Gradle here, and install it. Now, we’ll create our project, which we shall call JModern, by first creating the jmodern directory, changing into that directory, and running

gradle init --type java-library

Gradle creates a skeleton project, with some stub classes (Library.java and LibraryTest.java) which we will need to delete:

Gradle init directory structure

Our source code goes into src/main/java/ while our test code goes in src/test/java/. Let’s call our main class jmodern.Main (so its source file is src/main/java/jmodern/Main.java), which for now will be a variation of Hello World, but in order to have some fun with Gradle, we will use Google’s Guava library as well. Use your favorite text editor to create src/main/java/jmodern/Main.java, which will initially consist of this code:

package jmodern;

import com.google.common.base.Strings;

public class Main {
    public static void main(String[] args) {
        System.out.println(triple("Hello World!"));
        System.out.println("My name is " + System.getProperty("jmodern.name"));
    }

    static String triple(String str) {
        return Strings.repeat(str, 3);
    }
}

Let’s also create a small unit-test suite in src/test/java/jmodern/MainTest.java:

package jmodern;

import static org.hamcrest.CoreMatchers.*;
import static org.junit.Assert.*;
import org.junit.Test;

public class MainTest {
    @Test
    public void testTriple() {
        assertThat(Main.triple("AB"), equalTo("ABABAB"));
    }
}

Now, we’ll modify build.gradle in the main project directory to be:

apply plugin: 'java'
apply plugin: 'application'

sourceCompatibility = '1.8'
mainClassName = 'jmodern.Main'

repositories {
    mavenCentral()
}

dependencies {
    compile 'com.google.guava:guava:17.0'
    testCompile 'junit:junit:4.11' // A dependency for a test framework.
}

run {
    systemProperty 'jmodern.name', 'Jack'
}

The build file sets jmodern.Main as the main class, it declares Guava as a dependency, and sets the value of the jmodern.name system property, which we read in our program. When we run:

gradle run

Gradle will download Guava from Maven Central, compile our program, and run it with Guava on the classpath, and jmodern.name set to "Jack". That’s it.

Now, for kicks, let’s run the unit tests:

gradle build

The test report, now found in build/reports/tests/index.html, looks like this:

Gradle test report

The IDE

Some people say that IDEs are there to hide problems with the programming language. Well, I don’t have an opinion about that, but having a good IDE always helps, regardless of the programming language you’re using, and Java’s got the best around. While the choice of an IDE is not as important as anything else in this article, of the “big three” Java IDEs: Eclipse, IntelliJ IDEA, and NetBeans, you should really use either IntelliJ or NetBeans. IntelliJ is probably the most powerful of the three, while NetBeans is the most intuitive and easiest to get started with (and, in my opinion, the best looking). Also, NetBeans has the best Gradle support thanks to the Gradle plugin (which can be installed by going to Tools -> Plugins -> Available Plugins). Eclipse is (still?) the most popular of the three. I abandoned it some years ago, and from what I hear it’s become kind of a mess, but if you’re a long-time Eclipse user and are happy with it, that’s OK, too.

Here’s how our little project looks in NetBeans, after installing the Gradle plugin:

NetBeans

What I like best about NetBeans’ Gradle support is that the IDE takes not only the project dependencies from the build file, but all other configurations as well, so we only need to specify them once – in the build file. If you’re adding new dependencies to the build file while the project is open in NetBeans, you’ll want to right-click the project and select “Reload Project”, so that NetBeans can download the dependencies. If you then right-click the “Dependencies” node of the project in the IDE and choose “Download Sources”, NetBeans will download the dependencies’ source code and Javadoc, so you can step into the third-party library code in the debugger, or see API documentation as you type.

Documenting Your Code in Markdown

Java has long had really good API documentation with Javadoc, and Java developers are accustomed to writing Javadoc comments. But the modern Java developer likes Markdown, and would like to spice up their Javadoc with it. To do that, we will use the Pegdown Doclet project (a Doclet is a Javadoc plugin) by making the following additions to our build file. Before the dependencies section, we will add

configurations { markdownDoclet }

and we’ll add this line to dependencies:

markdownDoclet 'ch.raffael.pegdown-doclet:pegdown-doclet:1.1.1'

Finally, put this somewhere in the build file:

javadoc.options {
    docletpath = configurations.markdownDoclet.files.asType(List) // gradle should really make this simpler
    doclet = "ch.raffael.doclets.pegdown.PegdownDoclet"
    addStringOption("parse-timeout", "10")
}

Now we can use Markdown in our Javadoc comments, complete with syntax highlighting!

You might want to turn off your IDE’s comment formatting (in Netbeans: Preferences -> Editor -> Formatting, choose Java and Comments, and uncheck Enable Comments Formatting). IntelliJ has a plugin that renders our Markdown Javadocs in the IDE.

To test our setup, let’s add a fancy Markdown Javadoc to the randomString method:

/**
 * ## The Random String Generator
 *
 * This method doesn't do much, except for generating a random string. It:
 *
 *  * Generates a random string at a given length, `length`
 *  * Uses only characters in the range given by `from` and `to`.
 *
 * Example:
 *
 * ```java
 * randomString(new Random(), 'a', 'z', 10);
 * ```
 *
 * @param r      the random number generator
 * @param from   the first character in the character range, inclusive
 * @param to     the last character in the character range, inclusive
 * @param length the length of the generated string
 * @return the generated string of length `length`
 */
public static String randomString(Random r, char from, char to, int length) ...

Then, generate the javadocs with gradle javadoc, which will put the html files in build/docs/javadoc/. Our doc will look like this:

Markdown Javadoc

I don’t use Markdown in comments often, as it doesn’t render well in IDEs. But it does make life much easier when you want to include code examples in your Javadoc.

Write Succinct Code with Java 8

The recent release of Java brought the biggest change to the language since its original release with the addition of lambda expressions. Lambda expressions (with type inference) address one of the biggest issues people have had with the Java language, namely unjustified verbosity when doing simple stuff. To see how much lambda expressions help, I’ve taken the most infuriatingly verbose, simple data manipulation example I could think of, and wrote it in Java 8. It generates a list of random “student names” (just random strings), groups them by their first letter, and prints out a nicely formatted student directory. So now let’s run our program after changing our Main class to this:

package jmodern;

import java.util.List;
import java.util.Map;
import java.util.Random;
import static java.util.stream.Collectors.*;
import static java.util.stream.IntStream.range;

public class Main {
    public static void main(String[] args) {
        // generate a list of 100 random names
        List<String> students = range(0, 100)
                .mapToObj(i -> randomString(new Random(), 'A', 'Z', 10))
                .collect(toList());

        // sort names and group by the first letter
        Map<Character, List<String>> directory = students.stream()
                .sorted()
                .collect(groupingBy(name -> name.charAt(0)));

        // print a nicely-formatted student directory
        directory.forEach((letter, names) ->
                System.out.println(letter + "\n\t" + names.stream().collect(joining("\n\t"))));
    }

    public static String randomString(Random r, char from, char to, int length) {
        return r.ints(from, to + 1).limit(length)
                .collect(() -> new StringBuffer(), (sb, c) -> sb.append((char) c), (sb1, sb2) -> sb1.append(sb2))
                .toString();
    }
}

Java infers the types of all lambdas’ arguments, but everything is still type safe, and if you’re using an IDE, you’ll get autocomplete and refactoring for all type-inferred variables. Java does not infer types for local variables (like the auto keyword in C++ or var in C# or Go) because that would arguably hurt code readability. But that doesn’t mean you have to manually type the types (heh). For example, type Alt+Enter in NetBeans on this line: students.stream().sorted().collect(Collectors.groupingBy(name -> name.charAt(0))) and the IDE will assign the result to a local variable of the appropriate type (in this case, Map<Character, List<String>>).

If we wanted to go a little crazier with the functional style, we could write the main method like so:

public static void main(String[] args) {
    range(0, 100)
            .mapToObj(i -> randomString(new Random(), 'A', 'Z', 10))
            .sorted()
            .collect(groupingBy(name -> name.charAt(0)))
            .forEach((letter, names) ->
                    System.out.println(letter + "\n\t" + names.stream().collect(joining("\n\t"))));
}

Not your father’s Java, indeed (look ma, no types!), but I would say that taking this too far would certainly go against the spirit of the language.

Even though Java has lambdas, it doesn’t have function types. Instead, lambda expressions are eventually converted to an appropriate functional interface, namely an interface with a single abstract method. This automatically makes a lot of legacy code work beautifully with lambdas. For example, the Arrays.sort method has always taken an instance of the Comparator interface, which simply specifies the single abstract int compare(T o1, T o2) method. In Java 8, a lambda expression can be used to sort an array of strings according to their third character:

Arrays.sort(array, (a, b) -> a.charAt(2) - b.charAt(2));

Java 8 also added the ability to include method implementations in interfaces (which turns them into what is known as “traits”). For example, the FooBar interface below contains two methods, one abstract (foo) and the other (bar) with a default implementation. The useFooBar method, well, uses a FooBar:

interface FooBar {
    int foo(int x);
    default boolean bar(int x) { return true; }
}

int useFooBar(int x, FooBar fb) {
    return fb.bar(x) ? fb.foo(x) : -1;
}

Even though FooBar has two methods, only one of them (foo) is abstract, so it is still a functional interface and can be created with a lambda expression. For example, a call along the lines of useFooBar(3, x -> x * x), passing foo as a lambda that squares its argument, will return 9: bar’s default implementation returns true, so foo(3) is evaluated.

Simple Lightweight Concurrency with Fibers

For people like me, who are interested in concurrent data structures, the JVM is paradise. On the one hand, it gives you low-level access to the CPU’s concurrency primitives like CAS instructions and memory fences, while on the other it gives you a platform-neutral memory model combined with world-class garbage collectors; the combination is everything you want when building high-performance concurrent data structures. But for those who use concurrency not because they want to but because they have to in order to scale their software – namely, everyone else – Java’s concurrency story is problematic.

True, Java was designed for concurrency from the get-go, and places a lot of emphasis on its concurrency constructs in every release. It’s got state-of-the-art implementations of very useful concurrent data structures (like ConcurrentHashMap, ConcurrentSkipListMap, and ConcurrentLinkedQueue) – not even Erlang and Go have those – and is usually 5 years or more ahead of C++ when it comes to concurrency, but using all this stuff correctly and efficiently is pretty damn hard. First we had threads and locks, and those worked fine for a while, until we needed more concurrency and that approach didn’t scale very well. Then we had thread pools and events: those scale quite well, but can even be harder to reason about, especially in a language that does not protect against racy mutation of shared state. Besides, if your problem is that kernel threads don’t scale well, then asynchronous handling of events is a bad idea. Why not simply fix threads? That’s precisely the approach taken by Erlang and (much later) Go: lightweight, user-mode threads. Those allow mapping domain concurrency (like number of concurrent users) directly to program concurrency (lightweight threads). They allow for a simple, familiar, blocking programming style without sacrificing scalability, and for efficient use of synchronization constructs simpler than locks and semaphores.
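
To give a concrete taste of those data structures, here is a minimal sketch (the class and sample words are purely illustrative) of a thread-safe word counter built on ConcurrentHashMap: merge performs an atomic read-modify-write on a single entry, so any number of threads can bump counts concurrently without explicit locking:

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class WordCount {
    public static void main(String[] args) {
        ConcurrentMap<String, Integer> counts = new ConcurrentHashMap<>();
        List<String> words = Arrays.asList("fiber", "thread", "fiber", "channel", "fiber");

        // each merge() is a single atomic read-modify-write of one map entry,
        // so this loop could just as safely run on many threads at once
        words.parallelStream().forEach(w -> counts.merge(w, 1, Integer::sum));

        System.out.println(counts); // e.g. {channel=1, thread=1, fiber=3}
    }
}

No synchronized block is needed; the map itself guarantees the atomicity of each update.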

Quasar is an open-source library made by us, that adds true lightweight threads to the JVM (in Quasar they’re called fibers), where they can work naturally alongside plain (OS) threads. Quasar also has CSP mechanisms just like Go’s, and a very Erlang-like actor system. Fibers are certainly the modern developer’s weapon of choice when it comes to concurrency. They are simple, elegant and very performant. Let’s play with them for a bit.

First, we’ll setup the build. Merge the following into build.gradle:

configurations {
    quasar
}

dependencies {
    compile "co.paralleluniverse:quasar-core:0.5.0:jdk8"
    quasar  "co.paralleluniverse:quasar-core:0.5.0:jdk8"
}

run {
    jvmArgs "-javaagent:${configurations.quasar.iterator().next()}" // gradle should make this simpler, too
}

This will be our new Main.java (if you’re using NetBeans, you’ll want to right-click the project and select “Reload Project” after adding the new dependencies):

package jmodern;

import co.paralleluniverse.fibers.Fiber;
import co.paralleluniverse.strands.Strand;
import co.paralleluniverse.strands.channels.Channel;
import co.paralleluniverse.strands.channels.Channels;

public class Main {
    public static void main(String[] args) throws Exception {
        final Channel<Integer> ch = Channels.newChannel(0);

        new Fiber<Void>(() -> {
            for (int i = 0; i < 10; i++) {
                Strand.sleep(100);
                ch.send(i);
            }
            ch.close();
        }).start();

        new Fiber<Void>(() -> {
            Integer x;
            while ((x = ch.receive()) != null)
                System.out.println("--> " + x);
        }).start().join(); // join waits for this fiber to finish
    }
}

We now have two fibers communicating via a channel.

Strand.sleep, and all of the Strand class’s methods, work equally well whether we run our code in a fiber or a plain Java thread. Let’s now replace the first fiber with a plain (heavyweight) thread:

new Thread(Strand.toRunnable(() -> {
    for (int i = 0; i < 10; i++) {
        Strand.sleep(100);
        ch.send(i);
    }
    ch.close();
})).start();

and this works just as well (of course, we could have millions of fibers running in our app, but only up to a few thousand threads).

Now, let’s try channel selection (which mimics Go’s select statement):

package jmodern;

import co.paralleluniverse.fibers.Fiber;
import co.paralleluniverse.strands.Strand;
import co.paralleluniverse.strands.channels.Channel;
import co.paralleluniverse.strands.channels.Channels;
import co.paralleluniverse.strands.channels.SelectAction;
import static co.paralleluniverse.strands.channels.Selector.*;

public class Main {
    public static void main(String[] args) throws Exception {
        final Channel<Integer> ch1 = Channels.newChannel(0);
        final Channel<String> ch2 = Channels.newChannel(0);

        new Fiber<Void>(() -> {
            for (int i = 0; i < 10; i++) {
                Strand.sleep(100);
                ch1.send(i);
            }
            ch1.close();
        }).start();

        new Fiber<Void>(() -> {
            for (int i = 0; i < 10; i++) {
                Strand.sleep(130);
                ch2.send(Character.toString((char) ('a' + i)));
            }
            ch2.close();
        }).start();

        new Fiber<Void>(() -> {
            for (int i = 0; i < 10; i++) {
                SelectAction<Object> sa = select(receive(ch1), receive(ch2));
                switch (sa.index()) {
                    case 0:
                        System.out.println(sa.message() != null ? "Got a number: " + (int) sa.message() : "ch1 closed");
                        break;
                    case 1:
                        System.out.println(sa.message() != null ? "Got a string: " + (String) sa.message() : "ch2 closed");
                        break;
                }
            }
        }).start().join(); // join waits for this fiber to finish
    }
}

Starting with Quasar 0.6.0 (in development), you can use lambda expressions directly in the select statement (to try this at home, you’ll need to change Quasar’s version in the build file from 0.5.0 to 0.6.0-SNAPSHOT and add maven { url "https://oss.sonatype.org/content/repositories/snapshots" } to the repositories section), so the code running in the last fiber could also be written so:

for (int i = 0; i < 10; i++) {
    select(
        receive(ch1, x -> System.out.println(x != null ? "Got a number: " + x : "ch1 closed")),
        receive(ch2, x -> System.out.println(x != null ? "Got a string: " + x : "ch2 closed")));
}

Now let’s try some high-performance IO with fibers:

package jmodern;

import co.paralleluniverse.fibers.*;
import co.paralleluniverse.fibers.io.*;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.*;
import java.nio.charset.*;

public class Main {
    static final int PORT = 1234;
    static final Charset charset = Charset.forName("UTF-8");

    public static void main(String[] args) throws Exception {
        new Fiber(() -> {
            try {
                System.out.println("Starting server");
                FiberServerSocketChannel socket = FiberServerSocketChannel.open().bind(new InetSocketAddress(PORT));
                for (;;) {
                    FiberSocketChannel ch = socket.accept();
                    new Fiber(() -> {
                        try {
                            ByteBuffer buf = ByteBuffer.allocateDirect(1024);
                            int n = ch.read(buf);
                            String response = "HTTP/1.0 200 OK\r\nDate: Fri, 31 Dec 1999 23:59:59 GMT\r\n"
                                    + "Content-Type: text/html\r\nContent-Length: 0\r\n\r\n";
                            n = ch.write(charset.newEncoder().encode(CharBuffer.wrap(response)));
                            ch.close();
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                    }).start();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }).start();

        System.out.println("started");
        Thread.sleep(Long.MAX_VALUE);
    }
}

What have we done here? First, we launch a fiber that will loop forever, accepting TCP connection attempts. For each accepted connection, it spawns another fiber that reads the request, sends a response and then terminates. While this code is blocking on IO calls, under the covers it uses async EPoll-based IO, so it will scale as well as any async IO server (we’ve greatly improved IO performance in Quasar 0.6.0-SNAPSHOT).

But enough writing Go in Java. Let’s try Erlang.

Fault-Tolerant Actors and Hot Code Swapping

The actor model, (semi-)popularized by the Erlang language, is intended for the writing of fault-tolerant, highly maintainable applications. It breaks the application into independent fault-containment units – actors – and formalizes the handling of and recovery from errors.

Before we start playing with actors, we’ll need to add this dependency to the dependencies section in the build file: compile "co.paralleluniverse:quasar-actors:0.5.0".

Now let’s rewrite our Main class yet again, this time the code is more complicated as we want our app to be fault tolerant:

package jmodern;

import co.paralleluniverse.actors.*;
import co.paralleluniverse.fibers.*;
import co.paralleluniverse.strands.Strand;
import java.util.Objects;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

public class Main {
    public static void main(String[] args) throws Exception {
        new NaiveActor("naive").spawn();
        Strand.sleep(Long.MAX_VALUE);
    }

    static class BadActor extends BasicActor<String, Void> {
        private int count;

        @Override
        protected Void doRun() throws InterruptedException, SuspendExecution {
            System.out.println("(re)starting actor");
            for (;;) {
                String m = receive(300, TimeUnit.MILLISECONDS);
                if (m != null)
                    System.out.println("Got a message: " + m);
                System.out.println("I am but a lowly actor that sometimes fails: - " + (count++));
                if (ThreadLocalRandom.current().nextInt(30) == 0)
                    throw new RuntimeException("darn");
                checkCodeSwap(); // this is a convenient time for a code swap
            }
        }
    }

    static class NaiveActor extends BasicActor<Void, Void> {
        private ActorRef<String> myBadActor;

        public NaiveActor(String name) {
            super(name);
        }

        @Override
        protected Void doRun() throws InterruptedException, SuspendExecution {
            spawnBadActor();
            int count = 0;
            for (;;) {
                receive(500, TimeUnit.MILLISECONDS);
                myBadActor.send("hi from " + self() + " number " + (count++));
            }
        }

        private void spawnBadActor() {
            myBadActor = new BadActor().spawn();
            watch(myBadActor);
        }

        @Override
        protected Void handleLifecycleMessage(LifecycleMessage m) {
            if (m instanceof ExitMessage && Objects.equals(((ExitMessage) m).getActor(), myBadActor)) {
                System.out.println("My bad actor has just died of '" + ((ExitMessage) m).getCause() + "'. Restarting.");
                spawnBadActor();
            }
            return super.handleLifecycleMessage(m);
        }
    }
}

Here we have a NaiveActor spawning an instance of a BadActor, which occasionally fails. Because our naive actor watches its protege, it will be notified of its untimely death, and re-spawn a new one.

In this example, Java is rather annoying, especially when it comes to testing the type of a message with instanceof and casting objects from one type to another. This is much better done in Clojure or Kotlin (I’ll post a Kotlin actor example one day), with their pattern matching. So, yes, all this type-checking and casting is certainly bothersome, and if this type of code encourages you to give Kotlin a try – you should certainly go for it (I have, and I like Kotlin a lot, but it has to mature before it’s fit for use in production). Personally, I find this annoyance rather minimal.

But let’s get back to substance. A crucial component of actor-based fault-tolerant systems is reducing downtime caused not only by application errors, but also by maintenance. We will explore the JVM’s manageability in part 2 of this guide, but for now we’ll play with actor hot code swapping.

There are several ways to perform actor hot code swapping (e.g. via JMX, which we’ll learn about in part 2), but now we’ll do it by monitoring the file system. First, create a subdirectory under the project’s directory, which we’ll call modules. Then add the following line to build.gradle’s run section:

systemProperty"co.paralleluniverse.actors.moduleDir","${rootProject.projectDir}/modules"

Now, in a terminal window, start the program (gradle run, remember?). While the program is running, let’s go back to the editor, and modify our BadActor class a bit:

@Upgrade
static class BadActor extends BasicActor<String, Void> {
    private int count;

    @Override
    protected Void doRun() throws InterruptedException, SuspendExecution {
        System.out.println("(re)starting actor");
        for (;;) {
            String m = receive(300, TimeUnit.MILLISECONDS);
            if (m != null)
                System.out.println("Got a message: " + m);
            System.out.println("I am a lowly, but improved, actor that still sometimes fails: - " + (count++));
            if (ThreadLocalRandom.current().nextInt(100) == 0)
                throw new RuntimeException("darn");
            checkCodeSwap(); // this is a convenient time for a code swap
        }
    }
}

We add the @Upgrade annotation because that’s the class we want to upgrade, and modify the code so that now the actor fails less often. Now, while our original program is still running, let’s rebuild our program’s JAR, by running gradle jar in a new terminal window. For those unfamiliar with Java, JAR (Java Archive) files are used to package Java modules (we’ll discuss modern Java packaging and deployment in part 2). Finally, in that second terminal, copy build/libs/jmodern.jar into our modules directory. In Linux/Mac:

cp build/libs/jmodern.jar modules

You’ll see the running program changing (depending on your OS, this can take up to 10 seconds). Note that unlike when we restarted BadActor after it failed, when we swap the code its internal state (the value of count) is preserved.

Designing fault-tolerant applications with actors is a big subject, but I hope you’ve now got a little taste of what’s possible.

Pluggable Types

Before signing off, we’ll venture into dangerous territory. The tool we’ll play with in this section cannot be added to the modern Java developer’s toolbelt just yet, as using it is still too cumbersome, and it would greatly benefit from IDE integration, which is currently very sketchy. Nevertheless, the possibilities it opens are so cool, that if the tool continues to be developed and fleshed out, and if it’s not overused in a frenzy, it might prove invaluable, and that is why it’s included here.

One of the potentially most powerful (and probably least discussed) new features in Java 8 is type annotations and pluggable type systems. The Java compiler now allows adding annotations wherever it allows specifying a type (we will shortly see an example). This, combined with the ability to plug annotation processors into the compiler, opens the door to pluggable type systems. These optional type systems, which can be turned on and off, can add powerful type-based static verification to Java code. The Checker framework is a library that allows (advanced) developers to write their own pluggable type systems, complete with inheritance, type inference and more. It also comes pre-packaged with quite a few type systems that verify nullability, tainting, regular expressions, physical units, immutability and more.

I haven’t been able to get Checker to work well with NetBeans, so for this section, we’ll continue without our IDE. First, let’s modify build.gradle a bit. We’ll merge the following:

configurations {
    checker
}

dependencies {
    checker 'org.checkerframework:jdk8:1.9.0'
    compile 'org.checkerframework:checker:1.9.0'
}

into the respective configurations and dependencies sections.

Then, we’ll put this somewhere in the build file:

compileJava {
    options.fork = true
    options.forkOptions.jvmArgs = ["-Xbootclasspath/p:${configurations.checker.asPath}:${System.getenv('JAVA_HOME')}/lib/tools.jar"]
    options.compilerArgs = ['-processor', 'org.checkerframework.checker.nullness.NullnessChecker,org.checkerframework.checker.units.UnitsChecker,org.checkerframework.checker.tainting.TaintingChecker']
}

(as I said, cumbersome).

The last line says that we would like to use Checker’s nullness type system, the physical units type system, and the tainted data type system.

Now let’s run a few experiments. First, let’s try the nullability type system, which is supposed to prevent null pointer exceptions:

package jmodern;

import org.checkerframework.checker.nullness.qual.*;

public class Main {
    public static void main(String[] args) {
        String str1 = "hi";
        foo(str1); // we know str1 to be non-null

        String str2 = System.getProperty("foo");
        // foo(str2); // <-- doesn't compile as str2 may be null
        if (str2 != null)
            foo(str2); // after the null test it compiles
    }

    static void foo(@NonNull String s) {
        System.out.println("==> " + s.length());
    }
}

The Checker framework developers were kind enough to annotate the return types of the entire JDK for nullability, so you should be able to pass the return value of library methods that never return null as a @NonNull argument (but I haven't tried).

Next, let’s try the units type system, supposed to prevent unit conversion errors:

package jmodern;

import org.checkerframework.checker.units.qual.*;

public class Main {
    @SuppressWarnings("unsafe")
    private static final @m int m = (@m int) 1; // define 1 meter
    @SuppressWarnings("unsafe")
    private static final @s int s = (@s int) 1; // define 1 second

    public static void main(String[] args) {
        @m double meters = 5.0 * m;
        @s double seconds = 2.0 * s;
        // @kmPERh double speed = meters / seconds; // <-- doesn't compile
        @mPERs double speed = meters / seconds;
        System.out.println("Speed: " + speed);
    }
}

Cool. According to the Checker documentation, you can also define your own physical units.

Finally, let’s try the tainting type system, which helps you track tainted (potentially dangerous) data obtained, say, as a user input:

package jmodern;

import org.checkerframework.checker.tainting.qual.*;

public class Main {
    public static void main(String[] args) {
        // process(parse(read())); // <-- doesn't compile, as process cannot accept tainted data
        process(parse(sanitize(read())));
    }

    static @Tainted String read() {
        return "12345"; // pretend we've got this from the user
    }

    @SuppressWarnings("tainting")
    static @Untainted String sanitize(@Tainted String s) {
        if (s.length() > 10)
            throw new IllegalArgumentException("I don't wanna do that!");
        return (@Untainted String) s;
    }

    // doesn't change the tainted qualifier of the data
    @SuppressWarnings("tainting")
    static @PolyTainted int parse(@PolyTainted String s) {
        return (@PolyTainted int) Integer.parseInt(s); // apparently the JDK libraries aren't annotated with @PolyTainted
    }

    static void process(@Untainted int data) {
        System.out.println("--> " + data);
    }
}

Checker gives Java pluggable (can be turned on or off) intersection types (you can have @m int or @m double), with type inference (e.g. a null check turns a @Nullable into a @NonNull), and type annotations can even be added to pre-compiled libraries with the help of a tool. Not even Haskell can do that!

Checker isn’t ready for primetime yet, but when it is, if used wisely, it could become one of the modern Java developer’s most powerful tools.

Wrapping Up (For Now)

We've seen how, with the changes made in Java 8 along with modern tools and libraries, Java bears little resemblance to the Java of old. While it still shines in large applications, the language and ecosystem now compete nicely with newer "simple" languages, which are less mature, less tested, less platform-independent, have much smaller ecosystems and almost always poorer performance than Java. We have learned how the modern Java programmer writes code, but we have hardly begun to unleash the full power of Java and the JVM. In particular, we are yet to see Java's awesome monitoring and profiling tools, or its new, lean web microframeworks. We will visit those topics in the upcoming blog posts.

In case you want to get a head start, in the next installment we will be discussing modern Java packaging (with Capsule, which is a little like npm, only much cooler), monitoring and management (with VisualVM, JMX, Jolokia and Metrics), profiling (with Java Flight Recorder, Mission Control, and Byteman), and benchmarking (with JMH). In part 3, we will discuss writing lightweight, scalable HTTP services with Dropwizard and Comsat, and Web Actors.

Here’s the Agreement Oculus Broke, According to ZeniMax


Game publisher ZeniMax said Thursday morning that Facebook-owned Oculus VR violated an intellectual property agreement regarding its forthcoming virtual reality headset, the Oculus Rift. Oculus denies the claim, but here’s what we know so far.

Until November of last year, Oculus CTO John Carmack was still an employee of ZeniMax subsidiary Id Software, which he co-founded in 1991. Id is perhaps best known for its groundbreaking first-person shooter games: Wolfenstein 3D, Doom and Quake.

The widely respected programmer’s interest in virtual reality, and his mentorship of Oculus co-founder Palmer Luckey, lent the startup a great deal of legitimacy early on.

Carmack told USA Today in February that part of the reason he left Id was that ZeniMax was not interested in letting him put future games like Doom 4 on the Rift. Today, he tweeted that the only thing his former bosses own is code.

In May 2012, a month before the Rift debuted at E3 2012, Id and Oculus signed a non-disclosure agreement governing how Carmack’s work on virtual reality as an Id employee could be shared between the companies.

A copy of the NDA obtained by Re/code says, in part, that Oculus “shall not acquire hereunder any right whatsoever to any proprietary information … nothing in this agreement is intended or shall be construed as a transfer, grant, license, release or waiver of any intellectual property rights in any proprietary information.”

The NDA notes that both parties were under “no commitment” to invest in one another or “enter into any other business arrangements of any nature whatsoever.” ZeniMax said in a statement that it discussed taking equity in Oculus but that talks between the two companies fell through.

Id legal EVP J. Griffin Lesher signed the NDA, as did Luckey. Carmack is not named, nor did he sign the document, but the contention is that all his work — not just code he wrote — as a ZeniMax employee was owned by the company, which expected compensation if that work ever made it into a money-making product. A source familiar with the dispute said Carmack had negotiated an exception to his ZeniMax contract for his aerospace startup, Armadillo Aerospace, but that no such exception was negotiated for virtual reality.

Oculus has been selling prototypes of the Rift since last year, albeit only to developers and not to consumers, from which it has made about $25.5 million; it was only after Facebook’s $2 billion purchase of the company, which received FTC approval last week, that ZeniMax decided to step forward.

Here’s the NDA Lesher and Luckey signed:



It's Different for Girls


Early in T/Maker’s life, I was working on a company-defining deal with a major PC manufacturer.  We were on track to do about a million in revenue that year:  This deal had the potential to bring in another quarter million, plus deliver millions of dollars in the years to come if it went well.  It was huge.

The PC manufacturer’s senior vice president who had been instrumental in crafting the deal suggested he and I sign over dinner in San Francisco to celebrate.  When I arrived at the restaurant, I found it a bit awkward to be seated at a table for four yet to be in two seats right next to each other, but it was a French restaurant and that seemed to be the style, so down I sat. 

Wine was brought and toasts were made to our great future together.  About halfway through the dinner he told me he had also brought me a  present, but it was under the table, and would I please give him my hand so he could give it to me.  I gave him my hand, and he placed it in his unzipped pants.

Yes, this really happened.

I left the restaurant very quickly.  The deal fell apart.  When I told my brother (T/Maker’s co-founder and chief software architect) what happened, he totally supported my decision to bolt. 

Years later, we decided to raise venture capital.  I was meeting with a Boston-based VC in his office.  He had a window behind his head and, unbeknownst to him or the other people in the office, I could see a reflection in that window of what was going on behind my head in the corridor (all-glass offices can be quite revealing in this way.)  As I pitched him, one of his partners engaged in a pantomime in the corridor, making a circle with the fingers of one hand while poking his other fingers through the circle, then thrusting his hips in a sexual fashion.  I found it rather hard to concentrate on my pitch.  I did not get a term sheet from that firm.

Luckily, I did get a term sheet from Hummer Winblad, we closed our series A with them and we continued to grow the business.  A few years later, I was pitching our B round at a Sand Hill firm.  This time, I was five months pregnant with my first child so I was pretty sure no one would be doing hip thrusts in the background.   The pitch had gone well and I was meeting with the partner who was going to lead the deal.  I was feeling the forward momentum, until the partner said the following:

“My partners are concerned that when you have this baby you are going to lose interest in the company and not be a good CEO.  How can you assure us that won’t happen?”

I did not get a term sheet from that firm, either.  But I did get a term sheet from DFJ, and they and Hummer Winblad went on to get a nice return for believing in me, even in all my pregnant glory. (And, this is one of the reasons why I am now a partner at DFJ – I have always found the DFJ crew to be incredibly supportive of women.)

Sadly, I have many stories like the above, and so do my fellow women entrepreneurs (though I leave it to them to divulge their own.)

What’s my point? 

Just that it is different for women entrepreneurs.  We face challenges that our male counterparts do not.

So what’s a girl to do?

In many situations, my answer is, you have to simply walk away. When I was a CEO, I operated under the principle that if I was not treated properly, it was not worth doing business with the other party.  I also believed that if one door was slammed in my face, there was always another door to knock on.  I was persistent, and lucky — I did find enough other doors that were accepting and I was able to build a successful business.

It pains and somewhat embarrasses me that I am not recommending calling out bad behavior and shaming the individual or individuals responsible.  In a perfect world people would have to account for their behavior.  But as an entrepreneur who spent years in a daily battle for existence, I did not feel like I could afford the hit I’d take in exposing these incidents.  (Again, not criminal behavior.  I suffered a few unwelcome gropes at late-night Comdex parties and the like, but never felt like I was in danger and I was always able to walk away unharmed.)

I do think things have improved, though of course I’m not an entrepreneur any more so perhaps it is situational.  I am still (sadly) often the only woman in the room – but my position as a board member in a room full of other board members and senior executives creates an environment where professionalism and civility tend to rule.  Plus, let’s be honest — I’m now in my mid-fifties so I have probably gone from the ‘tempting to grope’ category to the ‘likely to be invisible’ category.

I've also developed a pretty thick skin and don't take offense at some things that the me-of-30-years-ago might have found offensive.  For each of us there is a fine line between things that are colorful but harmless speech and things that are truly offensive — in fact I've been called out for using the expression "come to Jesus" by a devout Christian and "the pot calling the kettle black" by an African American entrepreneur – I had no idea those might be offensive to other people.  And in my British board meetings they use the expression "tits up" without a thought that I might find that a bit blush-worthy, and I've learned there's no mal-intent behind their usage so I just let it go.

That is why I encourage my fellow female trailblazers to look for the intent behind the words.  Offensive language is often unintentional, and sometimes you can turn an awkward situation into a bonding experience.

For example, during the dot-com bust, I was a partner at venture firm Mobius and we were dealing with a lot of trauma in our portfolio.  We held an offsite with all the deal partners plus our new general counsel Jason Mendelson (now a partner at Foundry and a fantastic venture capitalist and human being.)   As we reviewed the portfolio deal by deal, many of the deals needed more funding and at that time no VCs were following anyone else’s deals, so it was up to us to decide who would get more dough.  Each of us fought hard for every deal we managed.  After hearing about a dozen of these pleas, my partner Brad Feld (another mensch and great VC who is also a partner at Foundry) pushed back from the table, stood up, and said,

“This is bullshit.  Each one of us is just sitting here with his dick in his hand asking for more money without truly justifying it.”

Jason looked nervously at me, wondering how I was going to react. 

“This is making me very uncomfortable,” I said.

“Because I don’t even have a dick to hold.”

Without skipping a beat, Brad replied “well if you need a dick to hold you can borrow mine anytime.” 

I already knew Brad as a great guy and a huge supporter of women, and I took it for the joke it was intended to be.  Everyone laughed.  It broke the tension of the meeting and was a bonding moment for us all.

Frankly, I’m struggling with how to end this post, because there is no list of quick tips, no way to tie this topic up with a bright bow and be done with it.   I hope that by exposing my stories and my opinions I’m providing a perspective for my male readers to consider, one that they might not otherwise have had.  For my female readers, I hope this has offered some useful ways to think about situations they may face, and – if all else fails – at least provides the comforting knowledge that they are not alone.

‘~’ is being removed from Rust

RFC: Remove `~` in favor of `box` and `Box` by pcwalton · Pull Request #59 · rust-lang/rfcs · GitHub


A new book compiles knowledge necessary for society to recover after disaster


Washington state sues Kickstarted game creator who failed to deliver

The Washington State Office of the Attorney General announced yesterday that it has filed what it believes to be America's first consumer protection lawsuit involving crowdfunding -- specifically, a Kickstarter campaign for a game.

The suit alleges that Edward J. Polchlepek III (aka Ed Nash) and his company, Altius Management, failed to make good on a successful Kickstarter campaign for Asylum Playing Cards.

The project beat its original $15,000 goal to raise $25,146 by the time it ended in October 2012. The Attorney General's office alleges Polchlepek and Altius collected the money and neglected to deliver either the cards or the various backer rewards. Some of those backers live in the state of Washington, which allows the state's legal team to get involved.

"Consumers need to be aware that crowdfunding is not without risk,” stated Washington State Attorney General Bob Ferguson in a press release announcing the lawsuit. “This lawsuit sends a clear message to people seeking the public’s money: Washington state will not tolerate crowdfunding theft. The Attorney General’s Office will hold those accountable who don’t play by the rules."

If you're curious, you can read the full text of the complaint on Scribd.

The outcome of this case could have significant ramifications for Kickstarter's popularity as a funding platform for game development. When contacted for comment by a Geekwire reporter, a Kickstarter representative issued the following statement:

"Tens of thousands of incredible projects have been brought to life through Kickstarter. We want every backer to have an amazing experience, and we’re frustrated when they don’t. We hope this process brings resolution and clarity to the backers of this project."


Global heatmap of cycling and running routes


What's this: This dataset includes 77,688,848 rides and 19,660,163 runs representing about 220 billion total data points.

A Strava Labs project, learn more. Contact maps -at- strava.com

Micro Python – a lean and efficient implementation of Python 3

Micro Python -- a lean and efficient implementation of Python 3

Damien George  damien.p.george at gmail.com
Tue Jun 3 14:27:11 CEST 2014


Hi,

We would like to announce Micro Python, an implementation of Python 3
optimised to have a low memory footprint.

While Python has many attractive features, current implementations
(read CPython) are not suited for embedded devices, such as
microcontrollers and small systems-on-a-chip.  This is because CPython
uses an awful lot of RAM -- both stack and heap -- even for simple
things such as integer addition.

Micro Python is a new implementation of the Python 3 language, which
aims to be properly compatible with CPython, while sporting a very
minimal RAM footprint, a compact compiler, and a fast and efficient
runtime.  These goals have been met by employing many tricks with
pointers and bit stuffing, and placing as much as possible in
read-only memory.

Micro Python has the following features:

- Supports almost full Python 3 syntax, including yield (compiles
99.99% of the Python 3 standard library).
- Most scripts use significantly less RAM in Micro Python, and various
benchmark programs run faster, compared with CPython.
- A minimal ARM build fits in 80k of program space, and with all
features enabled it fits in around 200k on Linux.
- Micro Python needs only 2k RAM for a basic REPL.
- It has 2 modes of AOT (ahead of time) compilation to native machine
code, doubling execution speed.
- There is an inline assembler for use in time-critical
microcontroller applications.
- It is written in C99 ANSI C and compiles cleanly under Unix (POSIX),
Mac OS X, Windows and certain ARM based microcontrollers.
- It supports a growing subset of Python 3 types and operations.
- Part of the Python 3 standard library has already been ported to
Micro Python, and work is ongoing to port as much as feasible.

More info at:

http://micropython.org/

You can follow the progress and contribute at github:

www.github.com/micropython/micropython
www.github.com/micropython/micropython-lib

--
Damien / Micro Python team.



Why Startups Need to Focus on Sales, Not Marketing


JESSICA LIVINGSTON: The most important thing an early-stage startup should know about marketing is rather counterintuitive: that you probably shouldn’t be doing anything you’d use the term “marketing” to describe. Sales and marketing are two ends of a continuum. At the sales end your outreach is narrow and deep. At the marketing end it is broad and shallow. And for an early stage startup, narrow and deep is what you want — not just in the way you appeal to users, but in the type of product you build. Which means the kind of marketing you should be doing should be indistinguishable from sales: you should be talking to a small number of users who are seriously interested in what you’re making, not a broad audience who are on the whole indifferent.

Successful startups almost always start narrow and deep. Apple started with a computer Steve Wozniak made to impress his friends at the Homebrew Computer Club. There weren’t a lot of them, but they were really interested.  Facebook started out just for Harvard University students. Again, not a lot of potential users, but they really wanted it. Successful startups start narrow and deep partly because they don’t have the power to reach a big audience, so they have to choose a very interested one. But also because the product is still being defined.  The conversation with initial users is also market research.

See what other startup mentors have to say about marketing tactics.

At  Y Combinator, we advise most startups to begin by seeking out some core group of early adopters and then engaging with individual users to convince them to sign up.

For example, the early adopters of Airbnb were hosts and guests in New York City (Y Combinator funded Airbnb in Winter of 2009).  To grow, Airbnb needed to get more hosts and also help existing hosts convert better. So Brian Chesky and Joe Gebbia flew to New York every week to meet with hosts — teaching them how to price their listings, take better photos, and so on. They also asked hosts for introductions to potential new hosts, who they then met in person.

Stripe (YC S09) was particularly aggressive about signing up users manually at first. The YC alumni network is a good source of early adopters for a service like Stripe. Cofounders Patrick and John Collison worked their way methodically through it, and when someone agreed to try Stripe, the brothers would install it for them on the spot rather than email a link. We now call their technique "Collison installation."

Many guest speakers at Y Combinator offer stories about how manual the initial process of getting users was. Pinterest is a mass consumer product, but Ben Silbermann said even he began by recruiting users manually. Ben would literally walk into cafes in Palo Alto and ask random people to try out Pinterest while he gathered feedback over their shoulders.

The danger of the term “marketing” is that it implies the opposite end of the sales/marketing spectrum from the one startups should be focusing on. And just as focusing on the right end has a double benefit — you acquire users and define the product — focusing on the wrong end is doubly dangerous, because you not only fail to grow, but you can remain in denial about your product’s lameness.

All too often, I’ve seen founders build some initially mediocre product, announce it to the world, find that users never show up, and not know what to do next. As well as not getting any users, the startup never gets the feedback it needs to improve the product.

So why wouldn’t all founders start by engaging with users individually? Because it’s hard and demoralizing. Sales gives you a kind of harsh feedback that “marketing” doesn’t.  You try to convince someone to use what you’ve built, and they won’t. These conversations are painful, but necessary. I suspect from my experience that founders who want to remain in denial about the inadequacy of their product and/or the difficulty of starting a startup subconsciously prefer the broad and shallow “marketing” approach precisely because they can’t face the work and unpleasant truths they’ll find if they talk to users.

How should you measure if your manual efforts are effective?  Focus on growth rate rather than absolute numbers. Then you won’t be dismayed if the absolute numbers are small at first. If you have 20 users, you only need two more this week to grow 10%. And while two users is a small number for most products, 10% a week is a great growth rate. If you keep growing at 10% a week, the absolute numbers will eventually become impressive.
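To make that concrete, a quick compounding check: 20 users growing 10% a week works out to roughly 20 × 1.1^52, or about 2,800 users after a year, and on the order of 400,000 after two.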

Our advice at Y Combinator is always to make a really good product and go out and get users manually. The two work hand-in-hand: you need to talk individually to early adopters to make a really good product.  So focusing on the narrow and deep end of the sales/marketing continuum is not just the most effective way to get users. Your startup will die if you don’t.

Camlistore – open-source personal storage system for life


Camlistore is a set of open source formats, protocols, and software for modeling, storing, searching, sharing and synchronizing data in the post-PC era. Data may be files or objects, tweets or 5TB videos, and you can access it via a phone, browser or FUSE filesystem.

Camlistore (Content-Addressable Multi-Layer Indexed Storage) is under active development. If you're a programmer or fairly technical, you can probably get it up and running and get some utility out of it. Many bits and pieces are actively being developed, so be prepared for bugs and unfinished features.

Join the community, consider contributing, or file a bug.

Things Camlistore believes:

  • Your data is entirely under your control
  • Open Source
  • Paranoid about privacy, everything private by default
  • No SPOF: don't rely on any single party (including yourself)
  • Your data should be alive in 80 years, especially if you are

Latest Release

The latest release is 0.7 ("Brussels"), released 2014-02-27.

Follow the download and getting started instructions to set up Camlistore.

Video Demo

FOSDEM 2014 Camlistore presentation:

Or see the older presentations.

Contribute

In addition to user feedback, bug reports, and code contributions, we also accept Bitcoin:

Donate Bitcoins
All donations help fund full-time Camlistore developers (but not Brad or other Google employees)

Chris Lattner on Swift


I have worked for Apple since 2005, holding a number of different positions over the years (a partial history is available in the Apple section of my résumé). These days, I run the Developer Tools department, which is responsible for Xcode and Instruments, as well as compilers, debuggers, and related tools.

To answer a FAQ: Yes, I do still write code and most of it goes to llvm.org. However, due to the nature of the work, I usually can't talk about it until a couple of years after it happens. :)

I started work on the Swift Programming Language (wikipedia) in July of 2010. I implemented much of the basic language structure, with only a few people knowing of its existence. A few other (amazing) people started contributing in earnest late in 2011, and it became a major focus for the Apple Developer Tools group in July 2013.

The Swift language is the product of tireless effort from a team of language experts, documentation gurus, compiler optimization ninjas, and an incredibly important internal dogfooding group who provided feedback to help refine and battle-test ideas. Of course, it also greatly benefited from the experiences hard-won by many other languages in the field, drawing ideas from Objective-C, Rust, Haskell, Ruby, Python, C#, CLU, and far too many others to list.

The Xcode Playgrounds feature and REPL were a personal passion of mine, to make programming more interactive and approachable. The Xcode and LLDB teams have done a phenomenal job turning crazy ideas into something truly great. Playgrounds were heavily influenced by Bret Victor's ideas, by Light Table and by many other interactive systems. I hope that by making programming more approachable and fun, we'll appeal to the next generation of programmers and help redefine how Computer Science is taught.

I lead and am the original author of the LLVM Compiler Infrastructure, an open source umbrella project that includes all sorts of toolchain related technology: compilers, debuggers, JIT systems, optimizers, static analysis systems, etc. I started both LLVM and Clang and am still the individual with the most commits. Of course, as the community has grown, my contribution is being dwarfed by those from a wide range of really amazing folks.

LLVM has enjoyed broad industry success - being widely used in commercial products - as well as supporting hundreds of academic papers. For its contribution to the software industry, LLVM has been recognized with the ACM Software System Award.

For more details about LLVM, see:

  1. LLVM Compiler Infrastructure home page
  2. Invited talks about LLVM and other topics
  3. Random notes on LLVM - Unofficial notes and thoughts on LLVM extensions and todo items.

Here are some of my more notable publications from my graduate school work. A more complete list can be found on my resume.

A first-person engine in 265 lines of JS


A first-person engine in 265 lines

Today, let's drop into a world you can reach out and touch. In this article, we'll compose a first-person exploration from scratch, quickly and without difficult math, using a technique called raycasting. You may have seen it before in games like Daggerfall and Duke Nukem 3D, or more recently in Notch Persson's ludum dare entries. If it's good enough for Notch, it's good enough for me! [Demo (arrow keys / touch)][Source]

Raycasting feels like cheating, and as a lazy programmer, I love it. You get the immersion of a 3D environment without many of the complexities of "real 3D" to slow you down. For example, raycasts run in constant time, so you can load up a massive world and it will just work, without optimization, as quickly as a tiny world. Levels are defined as simple grids rather than as trees of polygon meshes, so you can dive right in without a 3D modeling background or mathematics PhD.

It's one of those techniques that blows you away with simplicity. In fifteen minutes you'll be taking photos of your office walls and checking your HR documents for rules against "building workplace gunfight simulations."

The Player

Where are we casting rays from? That's what the player is all about. We need just three properties: x, y, and direction.

function Player(x, y, direction) {
  this.x = x;
  this.y = y;
  this.direction = direction;
}

The Map

We'll store our map as a simple two-dimensional array. In this array, 0 represents no wall and 1 represents wall. You can get a lot more complex than this... for example, you could render walls of arbitrary heights, or you could pack several 'stories' of wall data into the array, but for our first attempt 0-vs-1 works great.

function Map(size) {
  this.size = size;
  this.wallGrid = new Uint8Array(size * size);
}

Casting a ray

Here's the trick: a raycasting engine doesn't draw the whole scene at once. Instead, it divides the scene into independent columns and renders them one-by-one. Each column represents a single ray cast out from the player at a particular angle. If the ray hits a wall, it measures the distance to that wall and draws a rectangle in its column. The height of the rectangle is determined by the distance the ray traveled - more distant walls are drawn shorter.

[Figure: raycasting basic idea]

The more rays you draw, the smoother the result.

1. Find each ray's angle

First, we find the angle at which to cast each ray. The angle depends on three things: the direction the player is facing, the field-of-view of the camera, and which column we're currently drawing.

var angle = this.fov * (column / this.resolution - 0.5);
var ray = map.cast(player, player.direction + angle, this.range);

2. Follow each ray through the grid

Next, we need to check for walls in each ray's path. Our goal is to end up with an array that lists each wall the ray passes through as it moves away from the player.

[Figure: raycaster grid]

Starting from the player, we find the nearest horizontal (stepX) and vertical (stepY) gridlines. We move to whichever is closer and check for a wall (inspect). Then we repeat until we've traced the entire length of each ray.

function ray(origin) {
  var stepX = step(sin, cos, origin.x, origin.y);
  var stepY = step(cos, sin, origin.y, origin.x, true);
  var nextStep = stepX.length2 < stepY.length2
    ? inspect(stepX, 1, 0, origin.distance, stepX.y)
    : inspect(stepY, 0, 1, origin.distance, stepY.x);

  if (nextStep.distance > range) return [origin];
  return [origin].concat(ray(nextStep));
}

Finding grid intersections is straightforward: just look for whole numbers of x (1, 2, 3, etc). Then, find a matching y by multiplying by the line's slope (rise / run).

var dx = run > 0 ? Math.floor(x + 1) - x : Math.ceil(x - 1) - x;
var dy = dx * (rise / run);
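
For context, here is a minimal sketch of what the full step helper might look like, assuming it returns the next gridline crossing along the ray together with its squared length (the noWall sentinel and the inverted flag are assumptions to keep the sketch self-contained):

// Minimal sketch (assumption): advance from (x, y) to the next gridline crossing.
// The inverted flag swaps the axes so one function handles both stepX and stepY.
var noWall = { length2: Infinity };

function step(rise, run, x, y, inverted) {
  if (run === 0) return noWall;                  // ray never crosses this set of gridlines
  var dx = run > 0 ? Math.floor(x + 1) - x : Math.ceil(x - 1) - x;
  var dy = dx * (rise / run);                    // matching change along the other axis
  return {
    x: inverted ? y + dy : x + dx,
    y: inverted ? x + dx : y + dy,
    length2: dx * dx + dy * dy                   // squared distance; the shorter step wins
  };
}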

Did you notice what's awesome about this part of the algorithm? We don't care how big the map is! We're only looking at specific points on the grid - approximately the same number of points each frame. Our example map is 32 x 32, but a map that's 32,000 x 32,000 would run just as quickly!

3. Draw a column

Once we've traced a ray, we need to draw any walls that it found in its path.

  var z = distance * Math.cos(angle);
  var wallHeight = this.height * height / z;

We determine the height of each wall by dividing its maximum height by z. The further away a wall is, the shorter we draw it.

Oh damn, where did this cosine come in? If we just use the raw distance from the player, we'll end up with a fisheye effect. Why? Imagine that you're facing a wall. The edges of the wall to your left and right are further away from you than the center of the wall. But you don't want straight walls to bulge out in the middle! To render flat walls as we really see them, we build a triangle out of each ray and find the perpendicular distance to the wall with cosine. Like this:

[Figure: raycaster distance]

And I promise, that's the hardest math in this whole thing.

Render the damn thing!

Let's use a Camera object to draw the map each frame from the player's perspective. It will be responsible for rendering each strip as we sweep from the left to the right of the screen.

Before it draws the walls, we'll render a skybox - just a big picture in the background with stars and a horizon. After the walls are done we'll drop a weapon into the foreground.

Camera.prototype.render = function(player, map) {
  this.drawSky(player.direction, map.skybox, map.light);
  this.drawColumns(player, map);
  this.drawWeapon(player.weapon, player.paces);
};

The camera's most important properties are resolution, field-of-view (fov), and range.

  • Resolution determines how many strips we draw each frame: how many rays we cast.
  • Field-of-view determines how wide of a lens we're looking through: the angles of the rays.
  • Range determines how far away we can see: the maximum length of each ray.
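
Here is a minimal sketch of a Camera constructor holding those three properties; the canvas handling and the default range value are assumptions rather than the article's exact code:

// Minimal sketch (assumption): a Camera that stores resolution, fov, and range.
function Camera(canvas, resolution, fov) {
  this.ctx = canvas.getContext('2d');
  this.width = canvas.width;
  this.height = canvas.height;
  this.resolution = resolution;             // how many columns (rays) we draw per frame
  this.spacing = this.width / resolution;   // pixel width of each column
  this.fov = fov;                           // lens width in radians, e.g. Math.PI * 0.4
  this.range = 14;                          // maximum ray length, in map cells
}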

Putting it all together

We'll use a Controls object to listen for arrow keys (and touch events) and a GameLoop object to call requestAnimationFrame. Our simple gameloop is just three lines:

loop.start(function frame(seconds) {
  map.update(seconds);
  player.update(controls.states, map, seconds);
  camera.render(player, map);
});
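
The GameLoop object itself isn't shown above; a minimal sketch, assuming it simply converts requestAnimationFrame timestamps into elapsed seconds for the callback, could look like this:

// Minimal sketch (assumption): call the frame callback with elapsed seconds.
function GameLoop() {
  this.lastTime = 0;
}

GameLoop.prototype.start = function(callback) {
  var self = this;
  requestAnimationFrame(function frame(time) {
    var seconds = (time - self.lastTime) / 1000;
    self.lastTime = time;
    if (seconds < 0.2) callback(seconds);   // skip huge gaps, e.g. after a tab switch
    requestAnimationFrame(frame);
  });
};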

The details

Rain

Rain is simulated with a bunch of very short walls in random places.

var rainDrops = Math.pow(Math.random(), 3) * s;
var rain = (rainDrops > 0) && this.project(0.1, angle, step.distance);

ctx.fillStyle = '#ffffff';
ctx.globalAlpha = 0.15;
while (--rainDrops > 0) ctx.fillRect(left, Math.random() * rain.top, 1, rain.height);

Instead of drawing the walls at their full width, we draw them one pixel wide.

Lighting and lightning

The lighting is actually shading. All walls are drawn at full brightness, and then covered with a black rectangle of some opacity. The opacity is determined by distance as well as by the wall's orientation (N/S/E/W).

ctx.fillStyle = '#000000';
ctx.globalAlpha = Math.max((step.distance + step.shading) / this.lightRange - map.light, 0);
ctx.fillRect(left, wall.top, width, wall.height);

To simulate lightning, map.light randomly spikes to 2 and then quickly fades down again.
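
A minimal sketch of that update, assuming map.update receives the elapsed seconds each frame (the fade and flash constants here are guesses, not the article's values):

// Minimal sketch (assumption): fade the light each frame, occasionally spike it to 2.
Map.prototype.update = function(seconds) {
  if (this.light > 0) {
    this.light = Math.max(this.light - 10 * seconds, 0);  // flash fades quickly
  } else if (Math.random() * 5 < seconds) {
    this.light = 2;                                        // rare lightning strike
  }
};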

Collision detection

To prevent the player from walking through walls, we just check his future position against our map. We check x and y independently so the player can slide along a wall:

Player.prototype.walk = function(distance, map) {
  var dx = Math.cos(this.direction) * distance;
  var dy = Math.sin(this.direction) * distance;
  if (map.get(this.x + dx, this.y) <= 0) this.x += dx;
  if (map.get(this.x, this.y + dy) <= 0) this.y += dy;
};

Wall textures

The walls would be pretty boring without a texture. How do we know which part of the wall texture to apply to a particular column? It's actually pretty simple: we take the remainder of our intersection point.

step.offset = offset - Math.floor(offset);
var textureX = Math.floor(texture.width * step.offset);

For example, an intersection with a wall at (10, 8.2) has a remainder of 0.2. That means that it's 20% from the left edge of the wall (8) and 80% from the right edge (9). So we multiply 0.2 * texture.width to find the x-coordinate for the texture image.

Try it out

Wander around the creepy ruins.

What's next?

Because raycasters are so fast and simple, you can try lots of ideas quickly. You could make a dungeon crawler, first-person shooter, or a grand-theft-auto style sandbox. Hell, the constant-time makes me want to build an oldschool MMORPG with a massive, procedurally generated world. Here are a few challenges to get you started:

  • Immersion. This example is begging for full-screen mouse-lock with a rainy background and thunderclaps synchronized to the lightning.
  • An indoors level. Replace the skybox with a symmetric gradient or, if you're feeling plucky, try rendering floor and ceiling tiles (think of it this way: they're just the spaces between the walls you're already drawing!)
  • Lighting objects. We already have a fairly robust lighting model. Why not place lights in the world and compute wall lighting based on them? Lights are 80% of atmosphere.
  • Good touch events. I've hacked in a couple of basic touch controls so folks on phones and tablets can try out the demo, but there's huge room for improvement.
  • Camera effects. For example, zooming, blurring, drunk mode, etc. With a raycaster, these are surprisingly simple. Start by modifying camera.fov in the console.

As always, if you build something cool, or have related work to share, email me or tweet me and I'll shout it from the rooftops.

Discuss

Join the discussion on Hacker News.

Credits

This "two hour" article turned into a "three week" article, and it would never have been released without several people's help:

End-To-End – OpenPGP Chrome extension from Google


End-To-End is a Chrome extension that helps you encrypt, decrypt, digitally sign, and verify signed messages within the browser using OpenPGP.

This is the source code for the alpha release of the End-To-End Chrome extension. It's built upon a newly developed, JavaScript-based crypto library. End-To-End implements the OpenPGP standard, IETF RFC 4880, enabling key generation, encryption, decryption, digital signature, and signature verification. We’re releasing this code to enable community review; it is not yet ready for general use.

For more background, please see our blog post.


Since this is source, I could just build this and submit it to the Chrome Web Store

Please don’t do this.

The End-To-End team takes its responsibility to provide solid crypto very seriously, and we don’t want at-risk groups that may not be technically sophisticated — journalists, human-rights workers, et al — to rely on End-To-End until we feel it’s ready. Prematurely making End-To-End available could have very serious real world ramifications.

One of the reasons we are doing this source code release is precisely so that the community as a whole can help us make sure that we haven’t overlooked anything in our implementation of End-To-End.

Once we feel that End-To-End is ready, we will release it via the Chrome Web Store ourselves.

Is my public key exported somewhere when I provide my e-mail address?

No. The public key stays local and isn’t exported unless you explicitly perform that action.

Does End-To-End work on enclosures or only the body of a Gmail message?

Only the body of the message. Please note that, as with all OpenPGP messages, the email subject line and list of recipients remain unencrypted.

I forgot my keyring passphrase!

If you forget your keyring’s passphrase, there is no way to recover your local keys. Please delete the extension, reinstall the extension, and then import the keys from your backup.

How do I set a passphrase on my key?

End-To-End implements passphrases per-keyring, not per-key as other OpenPGP software often does.

Our goal with this choice is to minimize the number of (additional) passphrases you have to remember and enter. The End-To-End keyring passphrase is used to encrypt the keyring when it's persisted to localStorage. Each time the End-To-End extension loads, it will require the passphrase to be entered once to decrypt the keyring.

If you import a private key that has a passphrase, End-To-End will ask you for that key's passphrase and decrypt the key. The imported key is then treated just like any other key.

How does End-To-End find a public key when I send a message?

The public key of a recipient needs to be imported into the local keyring before End-To-End can encrypt to it, or verify a signature from it.

What happens if I delete my key and generate a new one?

Your old key will be lost forever. Unless you backed it up, of course.

How can I just sign a message without encrypting it?

To only sign, delete all of the addresses from the recipient’s box in End-To-End’s compose window.

I’d like to import my End-To-End-generated private key into other OpenPGP software.

End-To-End generates Elliptic Curve-based keys, so they're only supported in GnuPG 2.1 and later, as well as Symantec’s PGP software, but not in GnuPG 1.x or 2.0. We recommend that you either generate a key that you will use with the extension from now on, or generate a non-EC key in other OpenPGP software and import that.

Please note that EC support was added to GnuPG 2.1 beta in 2010, but it hasn’t been released as a stable version yet. To communicate with other people that don't use End-To-End, you will need to either generate a key in GnuPG and then import it, or build GnuPG 2.1 yourself.

There's no mention of public and private keyrings; where are they?

End-To-End uses a single keyring that contains both private and public keys. You can export individual keys from within the Keys and Settings page.

Why do you only support Elliptic Curve (EC) key generation?

Generating RSA keypairs is very significantly slower than generating EC-based ones. EC-based keys are just as secure. Symantec’s PGP software and GnuPG 2.1 beta both support EC-based keys; we are greatly looking forward to a stable version of GnuPG 2.1 with EC support becoming available.

Please note that you can import existing, non-EC-based keys into End-To-End.

Will End-To-End work on mobile devices?

Not at the moment. End-To-End is implemented as a Chrome extension, and Chrome on mobile devices doesn’t support extensions.

Which RFCs does End-To-End support?

RFC 4880— OpenPGP Message Format

RFC 6637— Elliptic Curve Cryptography (ECC) in OpenPGP

End-To-End does not currently support RFC 3156 or RFC 5581.

I’ve found mojibake!

We've made efforts to prevent mojibake—for all non-Roman character encodings, not just Japanese—within messages, but you should not be surprised to encounter mojibake in non-message strings, including User IDs.

We perform no automatic character set detection and rely on the presence of the OpenPGP Charset header.

Are the private key(s) kept in memory, are they always purged after every operation, or is there a passphrase cache?

The private keys are kept in memory unencrypted. We recommend making sure your keyring has a passphrase so that private keys are stored encrypted in localStorage.

How safe are private keys in memory?

In memory, the private key is sandboxed by Chrome from other things. When private keys are in localStorage they’re not protected by Chrome’s sandbox, which is why we encrypt them there.

Please note that enabling Chrome’s "Automatically send usage statistics and crash reports to Google" means that, in the event of a crash, parts of memory containing private key material might be sent to Google.

Implementing crypto in JavaScript is considered heretical by some. When we started work on End-To-End, there was no JavaScript crypto library that met our needs, so we built our own. During development we took into consideration all the criticisms and risks that we are aware of, and invested effort to mitigate these risks as much as possible.

JavaScript has no native support for many core features used by cryptographic code

Modern JavaScript engines have Typed Arrays; CS-PRNG is available thanks to WebCrypto.

JavaScript cryptographic projects have had serious vulnerabilities in the past, reducing trust in JavaScript as an implementation language

In practice, no common programming language prevents the code from having vulnerabilities.

We hold ourselves to a higher standard; we started from scratch and created a testable, modern, cryptographic library. We created this new core library for End-To-End with support for BigInteger, modular arithmetic, Elliptic Curve, as well as symmetric and public-key encryption. Having done that, we then developed an OpenPGP implementation on top of it.

Parts of End-To-End’s library are already in use within Google. We hope our code will be used widely in future JS cryptographic projects.

JavaScript crypto has very real risk of side-channel attacks

Since JavaScript code doesn't control the instructions being executed by the CPU — the JavaScript engine can perform optimizations out of the code’s control — it creates the risk of security-sensitive information leaks.

End-To-End requires user interaction for private operations in normal use, mitigating this risk. Non-user-interaction actions are rate-limited and done in fixed time. End-To-End’s crypto operations are performed in a different process from the web apps it interacts with.

The End-To-End library is as timing-aware as it can be, and we've invested effort to mitigate any exploitable risk.

JavaScript code doesn't control memory; it's not really possible to reliably delete intermediate values

The threat model we are trying to address discounts adversaries with physical access and users with malware outside the browser. Chrome’s design means that extensions should be safe against other extensions. Adversaries with this level of access have a plethora of attacks available to compromise data even in systems that control their memory carefully and wipe it.

What about XSS and related bugs?

End-To-End uses Content Security Policy as well as inherently safe APIs in frameworks (strict Closure Templates). End-To-End doesn’t trust any website's DOM or context with unencrypted data. We have tried to ensure that the interaction between the extension and websites is minimal and does not reveal secrets to the website.

Are End-To-End security bugs eligible for Google’s Vulnerability Rewards Program?

Yes, we have specifically expanded the scope of our Vulnerability Rewards Program to include End-To-End. This means that reports of exploitable security bugs within End-To-End are eligible for a reward.

What about other bugs?

One of the reasons our first launch is source code only is because our past experience tells us that brand new implementations of the OpenPGP standard need time to mature. We have every expectation of encountering interesting bugs, particularly ones related to interop with other OpenPGP software, and existing keys and messages.


The security hole I found on Amazon.com


I found a security hole on Amazon last August. While looking at their HTTP headers, I happened to notice that the entire amazon.com domain was susceptible to clickjacking attacks. If I could trick you into clicking anywhere on a webpage I controlled, I could get you to buy any product that’s available for sale on Amazon. By the way, that includes any fake products that I added to Amazon myself. For the hack to work, you needed to be signed into your Amazon account and have one-click purchasing turned on. I created a working proof-of-concept that looked like this:

[Screenshot: the amazon-clickjacking proof of concept]

Clicking either button caused an instant purchase of the movie Click (get it?). I resisted the temptation to use the exploit to send myself a million dollars worth of free Amazon gift cards, and instead responsibly disclosed it to the Amazon security team. It took them months to fix it, but the security hole has finally been closed using the x-frame-options header that I recommended.

This hack is classic clickjacking. I created a transparent iframe containing a product page on amazon.com that had been carefully positioned so when you think you’re clicking on my page, you’re actually clicking the “Buy now” button on their site instead. Here’s the link to the code for the no longer working proof of concept.
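
If you want the same protection on your own site, here is a minimal sketch of setting that header from a Node/Express server (Express is only an illustration; it says nothing about Amazon's actual stack):

// Minimal sketch (assumption): refuse cross-origin framing so transparent-iframe
// clickjacking like the proof of concept above no longer works.
var express = require('express');
var app = express();

app.use(function(req, res, next) {
  res.setHeader('X-Frame-Options', 'SAMEORIGIN');  // or 'DENY' to forbid all framing
  next();
});

app.get('/', function(req, res) {
  res.send('This page cannot be embedded in a cross-origin iframe.');
});

app.listen(3000);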

The Red Hourglass: Self-Experimentation with Black Widow Bites

The Red Hourglass (Gordon Grice)

 Blair had been keeping widows in his laboratory for experiments on animals. (One of his experiments proved even the widow's eggs are toxic to mice.) He and his colleagues and assistants had collected the spiders from the wild; widows were plentiful around Tuscaloosa, Alabama. Blair captured Spider 111.33 in a rock pile near his own home on October 25, 1933. Like the other captive widows in Blair's laboratory, she was kept in a jar and provided with live insects. A water beetle became her last meal before the experiment. Then she went hungry for two weeks. Since earlier experimenters, like Baerg, had sometimes found it difficult to provoke a widow into biting, Blair wanted his spider hungry and irritable before he made any attempt to get bitten. (Incidentally, two weeks without food is a cakewalk for a widow. Other scientists working with a similar setup--many numbered widows in jars on shelves--once found that they had misplaced one widow at the back of a shelf for nine months. When they found her, she was still alive and eager to eat.)

On November 12, Spider 111.33 was, in Blair's words, "of moderate size, active and glossy black, with characteristic adult markings"--he means the red hourglass--"and appeared to be in excellent condition." Blair described himself as "aged 32, weighing 168 pounds...athletically inclined and in excellent health." A former college football player, Blair had just won the university's faculty tennis championship. He had monitored his body for a week and found his condition "normal." He had no particular sensitivity to mosquitoes or bees.

At ten forty-five in the morning, Blair used a small forceps to pick Spider 111.33 up by the abdomen and place her on his left hand. Without being prompted, she immediately bit him near the tip of his little finger, "twisting the cephalothorax from side to side as though to sink the claws of the chelicerae deeper into the flesh." The bite felt like a needle prick and a burn at the same time. Blair let the spider bite him for ten seconds, the burning growing more intense all the while. He removed the widow, putting it back into its jar unharmed.

A drop of "whitish fluid, slightly streaked with brown" beaded at the wound--venom laced with Blair's blood. The wound itself was so small that Blair couldn't see it even with a magnifying glass.

Blair's right hand was busy taking notes. Two minutes after the bite, he recorded a "bluish, pinpoint mark" where he had been bitten; the mark was surrounded by a disk of white skin. The finger was "burning." Soon the tip of the finger turned red, except for the pale area around the bite. The pain became "throbbing, lancinating."

Fifteen minutes after the bite, the pain had spread past the base of Blair's little finger. The side of his hand felt a bit numb. The area around the bite was sweating. The pain quickly traveled up his hand and arm, but it still was worst at the tip of his finger, which had swollen into a purple-red sausage.

At the twenty-two-minute mark, the vanguard of the pain had spread to Blair's chest, and the worst of it had progressed to his armpit, though the finger continued to throb. Noting the pain in the lymph node near his elbow, Blair deduced that the toxin had traveled through his lymphatic system.

Fifty minutes after the bite, Blair realized that the toxin was traveling in his blood. He felt "dull, drowsy, lethargic"; his blood pressure dropped; his pulse weakened; his breathing seemed deep. His white count began the steep climb it would continue throughout that day and night. His blood pressure and pulse continued to worsen.

Soon he felt flushed and had a headache and a pain in his upper belly. Malaise and pain in the neck muscles developed. Blair turned the note-taking duties over to his assistants. Shortly after noon, he noted that his legs felt "flushed, trembly" and his belly ached and was "tense." A rigid, pain-racked abdomen is a classic black widow symptom, as Blair knew from his study of other doctors' cases. He must have suspected he was about to experience pain much, much worse than he already felt. He asked to be taken to the hospital, which was three miles away. The ride took fifteen minutes, during which, as they say in politics, the situation deteriorated.

At half-past noon, Blair was at the hospital. His pulse was "weak and thready." His belly was rigid and racked with pain. His lower back ached. His chest hurt and felt "constricted." "Speech was difficult and jerky," he wrote later, adding in the detached tone obligatory for the medical journal in which he published his results, "respirations were rapid and labored, with a sharp brisk expiration accompanied by an audible grunt."

Blair's pains made it difficult for him to lie down for electrocardiograms--in fact, an assistant dutifully wrote down that he described it as "torture"--but he managed to lie still, and the EKGs proved normal. Hearing about the painful EKGs later, newspaper reporters wrongly assumed the venom had injured Blair's heart. That myth was repeated and embroidered in the press for decades, giving the widow's danger a spurious explanation easier for casual readers to grasp: heart attack.

Two hours after the bite, Blair lay on his side in fetal position. The pain had reached his legs. His "respirations were labored, with a gasping inspiration and a sharp, jerky expiration accompanied by an uncontrollable, loud, groaning grunt." He could not straighten his body, which was rigid and trembling; he certainly couldn't stand. His skin was pale and "ashy" and slick with clammy sweat. In short, he had fallen into deep shock. The bitten finger had turned blue.

Folk remedies reported from places as diverse as Madagascar and southern Europe involved the use of heat, and some doctors had reported hot baths and hot compresses helpful. William Baerg had attested the pain-relieving power of hot baths during his stay in the hospital. Blair decided to try this treatment on himself. As soon as his body was immersed, he felt an almost miraculous reduction of his pain, though it was still severe. His breath laboring, his forearms and hands jerking spastically, he allowed a nurse to take his blood pressure and pulse. His systolic pressure was 75; the diastolic pressure was too faint to determine with a cuff and stethoscope. His pulse remained weak and rapid--too rapid to count.

Forty-five minutes after Blair had arrived at the hospital, his colleague J. M. Forney arrived to take care of him. Forney found Blair lying in the bathtub, gasping for breath, his face contorted into the sweat-slick, heavy-lidded mask that has since come to be recognized as a typical symptom of widow bite. Blair said he felt dizzy. Forney later commented, "I do not recall having seen more abject pain manifested in any other medical or surgical condition."

After soaking for more than half an hour, Blair was removed from the bath, red as a boiled lobster. His breathing, like his pains, had improved as a result of the bath. Fifteen minutes later, both the ragged breathing and the pain were back at full force. Blair writhed in the hospital bed. Hot water bottles were packed against his back and belly, again reducing his pain. Perspiration poured from him, drenching his sheets. His blood pressure was 80 over 50. His pulse was a weak 120. He accepted an injection of morphine to help with the pain.

Blair continued to gulp down water. Sweat poured out of him and would for days, leaving him little moisture for producing urine. A red streak appeared on his left hand. He vomited and had diarrhea; he couldn't eat. In the evening of the first day, his blood pressure rebounded to 154 over 92; it stayed high for a week. His face swelled; his eyes were bloodshot and watery.

The night was terrible. He felt restless and could not sleep. The pain persisted. He had chills. A dose of barbiturates didn't help. He was in and out of hot baths all night. Sometime in that night the worst part came. Blair felt he couldn't endure any more pain. He said he was about to go insane; he was holding on only by an effort of his steadily weakening will. His caregivers injected him with morphine again.

The next day, his hands trembling, his arm broken out in a knobby rash, his breath stinking, his features distorted by swelling, Blair was still in pain, but he knew he was getting better. In the evening, as he sat guzzling orange juice, sweat pouring from his body, his worst symptom was pain in the legs.

By the third day, Blair was able to sleep and eat a little. His boardlike abdomen had finally relaxed. He was beginning to look like himself again as his swollen face returned to its normal proportions. He went home that day. It took about a week for all the serious symptoms to vanish. After that, his body itched for two more weeks, and the skin on his hands and feet peeled as if burned.

Blair later returned to his native Saskatchewan, where he had an illustrious career in cancer treatment and research. When he died of heart trouble at age forty-seven, prime ministers and other public figures eulogized him. The story of his black widow experiment, which the wire service had named one of the top ten human interest stories of 1933, was retold in the papers at his death, and one more accretion of myth was added to the story when his heart trouble was falsely attributed to the bite of the black widow sixteen years before.

Blair's ordeal convinced the skeptics the widow's bite is toxic and potentially deadly. Thousands of cases of latrodectism, as widow poisoning is called, have been documented since then. The variation in symptoms from one person to the next is remarkable, making some cases hard to diagnose. The constant is pain, usually all over the body but concentrated in the belly, legs, and lower back. Often the soles of the feet hurt--one woman said she felt as if someone were ripping off her toenails or taking an iron to her feet.

Some doctors trying to diagnose an uncertain case ask, "Is this the worst pain you've ever felt?" A "yes" suggests a diagnosis of black widow bite. Several doctors have made remarks similar to Forney's, about the widow causing the worst human suffering they ever witnessed (though one ranked the widow's bite second to tetanus, which is sometimes a complication of widow bite). One of the questions Blair had in mind when he began his experiment was whether people acquire immunity over successive bites. He never answered this question because, as he frankly admitted, he was afraid of having another experience like his first.

Besides pain, several other symptoms appear regularly in widow victims, and Blair's suffering provided examples of most of them: a rigid abdomen, the "mask of latrodectism" (a distorted face caused by pain and involuntary contraction of muscles), intense sweating (the body's attempt to purge the toxin), nausea, vomiting, swelling. A multitude of other symptoms have occurred in widow bite cases, including convulsions, fainting, paralysis, and amnesia. Baerg and a number of other victims reported nightmares and sleep disturbances after the life-threatening phase of their reactions had passed.

Blair's fear for his sanity was not unusual either. Other patients have expressed similar fears, and some, like Baerg, have lapsed into delirium. Some have tried to kill themselves to stop the pain. (A few people have intentionally tried to get bitten as a method of suicide. It would be hard to imagine a method at once so uncertain and so painful.)

The venom contains a neurotoxin that accounts for the pain and the system-wide effects like roller-coaster blood pressure. But this chemical explanation only opens the door to deeper mysteries. A dose of the venom contains only a few molecules of the neurotoxin, which has a high molecular weight--in fact, the molecules are large enough to be seen under an ordinary microscope. How do these few molecules manage to affect the entire body of an animal weighing hundreds or even thousands of pounds? No one has explained the specific mechanism. It seems to involve a neural cascade, a series of reactions initiated by the toxin, but with the toxin not directly involved in any but the first steps of the process. The toxin somehow flips a switch that activates a self-torture mechanism.

People sometimes die from widow bites. Thorpe and Woodson report the case of a two-year-old boy who was walking in the garden with his grandfather when he said his big toe hurt. He soon fell unconscious. Within an hour he lay dead. The grandfather went to the spot in the garden where the boy had felt the pain. He turned over a rock. A black widow, suddenly exposed, wobbled away over the flagstones.

Widow bites kill old people with greater-than-average frequency, apparently because they're especially susceptible to some of the secondary effects. The high blood pressure, for example, kills some victims via stroke or heart attack. That's what happened to Harry Carey, an actor best known for his character roles in John Wayne Westerns. A black widow bit him while he was working on Red River; he died of a heart attack.

Many of the symptoms reported for widow bites are actually symptoms of such complications. Anybody who already has a serious medical problem runs a big risk when bitten by a widow. One man with a chronic kidney problem died from a bite, the toxin overtaxing his diseased kidneys as they tried to clean his blood. Another common complication, and a proven killer in widow bite cases, is infection. The widow's habit of dwelling in outhouses and piles of trash can make her bite septic. Besides tetanus, encephalitis and gruesome staph infections of the skin have also killed bite victims.


I Sold My Startup for $25.5 Million


I sold my startup for $25.5 million on Monday just after 2:23 p.m. Pacific Time.

Selling the company, Perfect Audience, to Marin Software took six months of writing carefully worded emails, meeting secretly in cafés, and pacing around the streets of San Francisco’s SoMa neighborhood after dark. In the end, I sold Perfect Audience—a software platform which helps small businesses buy online ads—on a phone call on which I barely said anything at all. Our lawyers conferred with their lawyers. It was agreed that after weeks of due diligence, the seemingly 14,213 closing conditions had finally been met. Marin’s lawyers declared the deal “closed,” everyone dropped off the conference call, and my company officially belonged to someone else.

Perfect Audience started as an ad design product called NowSpots, which was itself spun out of a previous company called Windy Citizen, a local news aggregator that I bootstrapped (entrepreneur-speak for self-financed). NowSpots got me into Y Combinator, the prestigious startup incubator, in 2011, and I raised a $1 million seed round for it. But it became clear that the market for ad design products was tiny compared to the market for online ad buying. As we learned how advertisers buy online ads, we saw that ad retargeting—in which you show ads to people who recently visited your website—was where marketing dollars were going, and that there was a need for an easy-to-use software solution.

Fast-forward to November 2013. My co-founder, Jordan Buller, and I had a 14-person team, more than 5,000 customers, and a business seeing double-digit growth month over month. We’d managed to use what remained of our $1 million seed round to build a legit company. Enter Marin.

The acquisition process started in late November, when we received an email from someone at Marin asking to discuss a strategic partnership. When founders are starting out, partnership inquiries sound really exciting. In theory, a successful partnership with a larger company could help your company get more customers. What you realize, though, is that partnerships are rarely a real thing. When you work with another company, either they are your customer or you are their customer. Anything other than that usually just eats up time and energy. 

After years of pointless meetings, if something sounded promising, I’d take a call, but just one. So I took the call with Marin. That led to an invitation to demo our ad retargeting software to execs at Marin. Over the last two years I’d given hundreds of product demos to small groups of potential customers. This time, though, when I walked into the room, nearly 20 execs were there to hear it. A 30-minute demo turned into a two-hour discussion. We were impressed.

The next week Jordan and I were invited to demo for a smaller group, including the Marin CEO. We were asked if we’d be interested in discussing an acquisition. We said we’d be open to offers but weren’t looking to sell the company. Given how well the business was doing, we just had no need to sell.

The first offer came a week later. The three months that followed were a blur of negotiations, heated exchanges, asks, counters, and conferring with investors.

Eventually we agreed on terms and signed what's known as a term sheet. A term sheet is like an agreement to agree on something. By itself it's more or less meaningless, but in the Valley, a signed term sheet is a sacred indicator of intent. Once a term sheet is signed, a deal is going down …

… unless something horrible happens during due diligence, which began soon after. Marin’s team sent over a list of hundreds of technical, legal, and business questions that we’d need to answer for the deal to go through. What type of database had we used? Had we used any software we didn’t have the rights for? Did we have IP assignments from every contractor who touched our code? How did our billing system work? How did we make money? How much money were we making?

Tracking down document after document was tedious beyond compare. And during this time, we had to keep the whole thing a secret from our employees. This meant that Jordan and I were effectively leading double lives for two months. The company kept growing revenues each month, but the stress was killing us.

Eventually Marin’s team was satisfied with our company and we were satisfied with the terms of the deal and their plans post-acquisition. That meant it was time to get the thing closed.

It turns out that closing is actually really hard to do. For starters, we had about a dozen investors who needed to sign off on the deal. Each of them had questions and concerns about various deal terms. We had both former and current employees we needed to run the deal by. Unlike a lot of other startups that give options to their employees, we’d given them actual shares, which made them all shareholders and required us to get their consent to the deal.

Telling the other 12 people on our team about the deal was itself a challenge. We wanted to tell everyone in person, but three employees work in Chicago, and one is based in Raleigh, North Carolina. So we needed to fly people to San Francisco on zero notice. Meanwhile, other employees had left on vacation or planned to work from home. Eventually we got everyone together in one room, for the first time in the company’s history.

When I shared the news, the team stared blankly at me, unsure if it was a good thing or a bad thing. My co-founder popped open a bottle of champagne and started pouring. We answered questions for an hour, and with each answer it became clear to the team that this was actually a really terrific outcome. The financial case was straightforward: because we had raised only $1 million in funding, the vast majority of the deal proceeds would go to employees. Also, a significant piece of the deal—more than 10 percent—had been set aside for restricted stock for them. They were getting significant windfalls, and in some cases they had worked for us for less than a year. Getting to tell our employees—people who took big risks to join our little company—that their decision had earned them a big chunk of cash and stock was the best part of the process.

After the deal closed, I announced it on our company blog and with an email to customers. This wasn’t really necessary because TechCrunch and a host of other startup blogs had already gone live with the story 90 minutes earlier. The press coverage was straightforward, but the reactions on Twitter and sites like Hacker News were surreal. Total strangers on the Internet were speculating on why we sold, how much we might have made, and what our revenues might have looked like. Our company’s biggest news day came on the day it ceased to exist as a legal entity! These days, most startups “exit” in a blaze of clichés, never to be heard from again. These “acquihires” are acquisitions in name only. I wanted to make it clear that we’re not going anywhere.

I wish I could say I felt elated, that I experienced the thrill of victory, or started hatching elaborate plans to spend my returns from the deal. But in those moments right after the deal closed, I was just too tired. I just wanted to take a nap and then get back to work and normalcy. Intellectually, I know this is a life-changing event. The financial rewards are great, and if I ever want to start another company, every piece of that process will be easier. But it's going to take a while for all that to sink in. For now I'm just glad to be done talking to lawyers three times a day and excited to return to solving business problems.

One thing that did cut through the exhaustion was a task I’d been anticipating for more than six years: writing the Facebook post in which I announce to friends, former friends, frenemies, ex-girlfriends, college roommates, future wives, and family members that I was not in fact an obscure failure but a new, minor footnote in the annals of Silicon Valley startup successes. 

Writing it was easy. I’d had six years to plot it in my head. I kept it simple and tried to strike the right mix between “Aw yeah!” and “Aw shucks!” No one likes a sore winner. I pushed it live and watched as over 400 comments rolled in. Meanwhile my phone buzzed across my desk as it received text messages from people I’d not heard from in years. The middle school crush. The Sunday school teacher. The startup friends from Chicago. At last!

HippyVM

[Chart: relative speedup vs. PHP, lower is better, normalized to Zend PHP 5.5.0]

HippyVM is an implementation of the PHP language using PyPy technology. It started off as a research project by Maciej Fijałkowski for Facebook, and was later expanded. As of now, it contains a reasonably fast and complete implementation of the core PHP language, including implementations of many of PHP's built-in modules (though not all of them yet). It does not yet include reasonable web server integration, so it is not usable in production in its current form.

HippyVM is being developed by Baroque Software, experts in the area of virtual machines.

How to write an iOS app purely in C


Damn, it took me a while but I got it:

main.c:

#include <CoreFoundation/CoreFoundation.h>

#include <objc/runtime.h>
#include <objc/message.h>

// This is a hack. Because we are writing in C, we cannot simply include
// <UIKit/UIKit.h>, as that header uses Objective-C constructs.
// However, neither can we give the full function declaration, like this:
// int UIApplicationMain (int argc, char *argv[], NSString *principalClassName, NSString *delegateClassName);
// So we rely on the fact that for both the i386 and ARM architectures,
// the registers used for passing parameters remain the same whether or not
// you are using varargs. This is actually the basis of the Objective-C
// runtime (objc_msgSend), so we are probably fine here; this would be
// the last thing I would expect to break.
extern int UIApplicationMain(int, ...);

// Entry point of the application. If you don't know what this is by now, 
// then you probably shouldn't be reading the rest of this post.
int main(int argc, char *argv[])
{
    // Create an @autoreleasepool, using the old-style API. 
    // Note that while NSAutoreleasePool IS deprecated, it still exists 
    // in the APIs for a reason, and we leverage that here. In a perfect 
    // world we wouldn't have to worry about this, but, remember, this is C.
    id autoreleasePool = objc_msgSend(objc_msgSend(objc_getClass("NSAutoreleasePool"), sel_registerName("alloc")), sel_registerName("init"));

    // Notice the use of CFSTR here. We cannot use an objective-c string 
    // literal @"someStr", as that would be using objective-c, obviously.
    UIApplicationMain(argc, argv, nil, CFSTR("AppDelegate"));

    objc_msgSend(autoreleasePool, sel_registerName("drain"));
}
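
One caveat that is not part of the original post: on newer SDKs, objc_msgSend is declared without a usable prototype, so calling it directly with extra arguments, as above, may no longer compile (or may pass arguments incorrectly). The usual workaround is to cast it to a function pointer with the exact signature of the method being called. A minimal sketch of that pattern, applied to the autorelease-pool lines above (the specific pointer types are my assumptions about the selectors involved, not something taken from the post):

// Sketch only: cast objc_msgSend to the signature we actually need before
// calling it, instead of relying on the variadic declaration.
id (*allocMsg)(Class, SEL) = (id (*)(Class, SEL))objc_msgSend;
id (*initMsg)(id, SEL)     = (id (*)(id, SEL))objc_msgSend;

id autoreleasePool = initMsg(allocMsg(objc_getClass("NSAutoreleasePool"),
                                      sel_registerName("alloc")),
                             sel_registerName("init"));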

AppDelegate.c:

#import <objc/runtime.h>
#import <objc/message.h>

// CoreGraphics is a plain C framework, so we can include it directly here.
// We need it for the definition of struct CGRect used below.
#include <CoreGraphics/CoreGraphics.h>

// This is equivalent to creating a @class with one public variable named 'window'.
struct AppDel
{
    Class isa;

    id window;
};

// This is a strong reference to the class of the AppDelegate 
// (same as [AppDelegate class])
Class AppDelClass;

// this is the entry point of the application, same as -application:didFinishLaunchingWithOptions:
// note the fact that we use `void *` for the 'application' and 'options' fields, as we need no reference to them for this to work. A generic id would suffice here as well.
BOOL AppDel_didFinishLaunching(struct AppDel *self, SEL _cmd, void *application, void *options)
{
    // we +alloc and -initWithFrame: our window here, so that we can have it show on screen (eventually).
    // this entire method is the objc-runtime based version of the standard View-Based application's launch code, so nothing here really should surprise you.
    // one thing important to note, though is that we use `sel_getUid()` instead of @selector().
    // this is because @selector is an objc language construct, and the application would not have been created in C if I used @selector.
    self->window = objc_msgSend(objc_getClass("UIWindow"), sel_getUid("alloc"));
    self->window = objc_msgSend(self->window, sel_getUid("initWithFrame:"), (struct CGRect) { 0, 0, 320, 480 });

    // here, we are creating our view controller, and our view. note the use of objc_getClass, because we cannot reference UIViewController directly in C.
    id viewController = objc_msgSend(objc_msgSend(objc_getClass("UIViewController"), sel_getUid("alloc")), sel_getUid("init"));

    // creating our custom view class, there really isn't too much 
    // to say here other than we are hard-coding the screen's bounds, 
    // because returning a struct from a `objc_msgSend()` (via 
    // [[UIScreen mainScreen] bounds]) requires a different function call
    // and is finicky at best.
    id view = objc_msgSend(objc_msgSend(objc_getClass("View"), sel_getUid("alloc")), sel_getUid("initWithFrame:"), (struct CGRect) { 0, 0, 320, 480 });

    // here we simply add the view to the view controller, and add the viewController to the window.
    objc_msgSend(objc_msgSend(viewController, sel_getUid("view")), sel_getUid("addSubview:"), view);
    objc_msgSend(self->window, sel_getUid("setRootViewController:"), viewController);

    // finally, we display the window on-screen.
    objc_msgSend(self->window, sel_getUid("makeKeyAndVisible"));

    return YES;
}

// note the use of the gcc attribute extension (constructor). 
// Basically, this lets us run arbitrary code before program startup,
// for more information read here: http://stackoverflow.com/questions/2053029
__attribute__((constructor))
static void initAppDel()
{
    // This is objc-runtime gibberish at best. We are creating a class with the 
    // name "AppDelegate" that is a subclass of "UIResponder". Note we do not need
    // to register for the UIApplicationDelegate protocol, that really is simply for 
    // Xcode's autocomplete, we just need to implement the method and we are golden.
    AppDelClass = objc_allocateClassPair(objc_getClass("UIResponder"), "AppDelegate", 0);

    // Here, we tell the objc runtime that we have a variable named "window" of type 'id'
    class_addIvar(AppDelClass, "window", sizeof(id), 0, "@");

    // We tell the objc-runtime that we have an implementation for the method
    // -application:didFinishLaunchingWithOptions:, and link that to our custom 
    // function defined above. Notice the final parameter. This tells the runtime
    // the types of arguments received by the function.
    class_addMethod(AppDelClass, sel_getUid("application:didFinishLaunchingWithOptions:"), (IMP) AppDel_didFinishLaunching, "i@:@@");

    // Finally we tell the runtime that we have finished describing the class and 
    // we can let the rest of the application use it.
    objc_registerClassPair(AppDelClass);
}
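
A side note that is not from the original post: the struct AppDel overlay above relies on the compiler laying out the window field exactly where the runtime placed the ivar added by class_addIvar. If you would rather not depend on that, the runtime can look the ivar up by name instead. A minimal sketch using the runtime calls object_getInstanceVariable and object_setInstanceVariable (the wrapper functions themselves are illustrative only):

// Illustrative helpers: fetch and store the "window" ivar through the
// runtime by name, instead of casting self to struct AppDel *.
static id AppDel_getWindow(id self)
{
    void *value = NULL;
    object_getInstanceVariable(self, "window", &value);
    return (id)value;
}

static void AppDel_setWindow(id self, id window)
{
    object_setInstanceVariable(self, "window", (void *)window);
}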

View.c

#include <objc/runtime.h>

// CoreGraphics is a C API, so we can include it directly. We need it for
// CGContextRef, the CGContext* drawing calls, and struct CGRect.
#include <CoreGraphics/CoreGraphics.h>

// UIGraphicsGetCurrentContext() is a plain C function, but it is declared in
// UIKit's headers, which we cannot include from C. So, just as we did with
// UIApplicationMain in main.c, we declare it ourselves.
extern CGContextRef UIGraphicsGetCurrentContext(void);

// This is a strong reference to the class of our custom view,
// In case we need it in the future.
Class ViewClass;

// This is a simple -drawRect implementation for our class. We could have 
// used a UILabel  or something of that sort instead, but I felt that this 
// stuck with the C-based mentality of the application.
void View_drawRect(id self, SEL _cmd, struct CGRect rect)
{
    // We are simply getting the graphics context of the current view, 
    // so we can draw to it
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Then we set its fill color to white so that we clear the background.
    // Note the cast to (CGFloat []). Otherwise, this would give a warning
    //  saying "invalid cast from type 'int' to 'CGFloat *', or 
    // 'extra elements in initializer'. Also note the assumption of RGBA.
    // If this wasn't a demo application, I would strongly recommend against this,
    // but for the most part you can be pretty sure that this is a safe move 
    // in an iOS application.
    CGContextSetFillColor(context, (CGFloat []){ 1, 1, 1, 1 });

    // here, we simply add and draw the rect to the screen
    CGContextAddRect(context, (struct CGRect) { 0, 0, 320, 480 });
    CGContextFillPath(context);

    // and we now set the drawing color to red, then add another rectangle
    // and draw to the screen
    CGContextSetFillColor(context, (CGFloat []) { 1, 0, 0, 1 });
    CGContextAddRect(context, (struct CGRect) { 10, 10, 20, 20 });
    CGContextFillPath(context);
}

// Once again we use the (constructor) attribute. generally speaking, 
// having many of these is a very bad idea, but in a small application 
// like this, it really shouldn't be that big of an issue.
__attribute__((constructor))
static void initView()
{
    // Once again, just like the app delegate, we tell the runtime to 
    // create a new class, this time a subclass of 'UIView' and named 'View'.
    ViewClass = objc_allocateClassPair(objc_getClass("UIView"), "View", 0);

    // and again, we tell the runtime to add a function called -drawRect: 
    // to our custom view. Note that there is an error in the type-specification
    // of this method, as I do not know the @encode sequence of 'CGRect' off 
    // of the top of my head. As a result, there is a chance that the rect 
    // parameter of the method may not get passed properly.
    class_addMethod(ViewClass, sel_getUid("drawRect:"), (IMP) View_drawRect, "v@:");

    // And again, we tell the runtime that this class is now valid to be used. 
    // At this point, the application should run and display the screenshot shown below.
    objc_registerClassPair(ViewClass);    
}

It's ugly, but it works.
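
One refinement that is my suggestion rather than part of the original post: the comment in initView admits that the "v@:" type encoding for -drawRect: is incomplete because the @encode string for CGRect is not known off-hand. Instead of guessing, you can copy the encoding from UIView's own -drawRect: at registration time, using the runtime calls class_getInstanceMethod and method_getTypeEncoding. A sketch of that variant of the class_addMethod call:

// Sketch only: reuse UIView's own type encoding for -drawRect:, so the
// struct CGRect argument is described exactly as the runtime expects.
Method superDrawRect = class_getInstanceMethod(objc_getClass("UIView"),
                                               sel_getUid("drawRect:"));
const char *drawRectTypes = method_getTypeEncoding(superDrawRect);
class_addMethod(ViewClass, sel_getUid("drawRect:"),
                (IMP) View_drawRect, drawRectTypes);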

If you would like to download this, you can get it from my Dropbox or from my GitHub repository: https://github.com/richardjrossiii/CBasediOSApp

[Screenshot of the finished app running]
