Friday, February 23, 2024

The Sinking of the Itanic: free ebook


Throughout my stint as an IT industry analyst during the 2000s, one of my significant interests was Intel's Itanium processor, a 64-bit design intended to succeed the ubiquitous 32-bit x86 family. I wrote my first big research report on Itanium and it seemed like something of an inevitability given Intel's dominance at the time.

But there were storm clouds. Intel took an approach to Itanium's design that was not wholly novel but it had never been commercially successful. The dot-com era was also drawing to a close even as Itanium's schedule slipped out. Furthermore, the initial implementation was not ready for primetime for a variety of reasons.

Especially with the benefit of hindsight, there were other problems with the way Intel and its partner, Hewlett-Packard, approached the market with Itanium as well. Itanium would ultimately fail, replaced by a much more straightforward extension to the existing x86 architecture.


This short book draws six lessons from Itanium's demise:

  • Lesson #1: It’s all about the timing, kid 
  • Lesson #2: Don’t unnecessarily proliferate risk
  • Lesson #3: Don’t fight the last war
  • Lesson #4: The road to hell is paved with critical dependencies
  • Lesson #5: Your brand may not give you a pass
  • Lesson #6: Some animals can’t be more equal than others

While Itanium is the case study for this book, many of the lessons apply to other projects as well.

Download your free PDF ebook today.


Wednesday, April 26, 2023

Changes are afoot

Things have been quiet around here, especially since I shut down my latest podcast and no longer needed this site for posting transcripts. I also toyed a bit pre-pandemic with setting up an explicitly travel, food, and experiences site. That didn't end up quite coming together, and then the pandemic hit. I also had other outlets for my professional writing that took precedence.

But time has passed and various events have happened. So here we are.

The first is that I have a new (mostly) professional/business web site that will be focused on matters open source, platforms, and other topics of a tech nature that capture my fancy. That's not to say it will be all deadly serious and sober, but it will be mostly focused on providing tech events/trends analysis and commentary.

At the same time, I'm spinning this site back up to provide more of an ongoing stream of personal commentary whether it's travel or ephemera that may be tech adjacent.

We'll see how it goes.


Monday, January 17, 2022

Hardware data security with John Shegerian

John Shegerian is the co-founder and executive chairman of recycling firm ERI and the author of The Insecurity of Everything. In this podcast, we talk about both the sustainability aspects of electronic waste and the increasing security risk associated with sensitive data stored on products that are no longer in use. Shegerian argues that CISOs and others responsible for security and risk mitigation at companies have historically been mostly focused on application security.

However, today, hardware data security is very important as well. He cites one study where almost 42% of 159 hard drives purchased online still held sensitive data. And the problem extends beyond hard drives and USB sticks to a wide range of devices such as copiers that store data that has passed through them.

Shegerian details some of the steps that companies (and individuals) can take to reduce the waste they send to landfills and to prevent their data from falling into the wrong hands. 

Fill out this form for a free copy of The Insecurity of Everything: https://eridirect.com/insecurity-of-everything-book/

Listen to the podcast [MP3 - 26:55]

Tuesday, January 04, 2022

RackN CEO Rob Hirschfeld on managing operational complexity

There's a lot of complexity, both necessary and unnecessary, in the environments where we deploy our software. The open source development model has proven to be a powerful tool for software development. How can we help people better collaborate in the open around operations? How can we create a virtuous cycle for operations?

Rob and I talked about these and other topics before the holidays. We also covered related topics including the skills shortage, complexity of the software supply chain, and building infrastructure automation pipelines.

Listen to the podcast [MP3 - 30:39]

[TRANSCRIPT]

Gordon Haff:  I'm here today with Rob Hirschfeld, the co-founder and CEO of RackN, for our just-before-the-holidays discussion. Our focus today is going to be on managing complexity.

The reason this is an interesting question for me is, we seem to be getting to this stage where open source on the one hand gives you the ability to get under the hood and play with code, and to run it wherever you want to.

But we seem to be getting to the point where people are saying, that's really hard to do, so maybe we should just put everything on a software as a service or on Amazon Web Services (which I think is actually down at the moment), which purports to solve that complexity problem.

Welcome, Rob.

Rob Hirschfeld:  I'm excited to be here. This complexity tsunami that people are feeling is definitely top of mind for me, because it feels like we're reaching a point where the complexity of the systems we've built is sort of unsustainable, even to the point where I've been describing it as the Jevons Paradox of Complexity. It's a big deal.

I do think it's worth saying up front, complexity is not bad in itself. We have a tendency to be like, "Simplify, simplify. Get rid of all the complexity." It's not that complexity is bad or avoidable. It's actually about management, like you stated right at the start. It's a managing-complexity problem, not an eliminating-complexity problem.

Gordon:  To a couple of your points about managing complexity, I mentioned we'll just use a software as a service. Using a software as a service may be just fine.

At Red Hat, we don't run our own email servers any longer. We used to. We use software as a service for email and documents, which of course causes a little tension: shouldn't we be doing everything in open source?

The reality is that with modern businesses, you have to decide where to focus your energy. Way back when, Nick Carr at Harvard Business Review wrote an article, basically, "Does IT Matter?" Nick, I think, particularly in the way we view things today, perhaps deliberately overstated his case, but he was absolutely right that you have to pick and choose which IT you're going to focus on and where you differentiate.

Rob:  I think that that's critical and we've been talking a lot in the last two years about supply chains. It is very much a supply chain question. Red Hat distributes the operating system as a thing and there's a lot of complexity between the code and your use of it that Red Hat is taking care of for you.

That's useful stuff to take out of your workflow and your process. Then, one of the challenges I've had with the SaaSification piece here, and I think we've seen it with outages lately, is that there is a huge degree of trust in how the SaaS is running that business for you, in their operational capability, and in what they're doing behind the scenes.

The Amazon outage, the really big one early in December, exposed that a lot of the SaaSes that depended on Amazon had outages because Amazon was out. So you can't just buy the SaaS and delegate everything to the SaaS.

I've been asking a question of how much you need to pierce that veil: do you care about where the SaaS is running, how the SaaS is operating, and how the SaaS is protecting what you're doing? Because you might have exposure that you're happily ignoring by employing a SaaS.

That could come back to bite you, or you could end up operationally responsible for it anyway.

Gordon:  Of course, if you are a customer of that SaaS, you don't care if the SaaS is saying, "But, but...It's not our fault. It's Amazon's fault." That's not a very satisfactory answer for a customer.

Rob:  It could be, if you've acknowledged the risk. People were talking about some of these outages as business snow days, where everybody is down and you can't do anything. Some businesses have the luxury of that, but not many want their email systems to be down or their internal communication systems to be down.

Those are business-critical systems, or their order entry, order taking, or delivery systems, and those outages take a lot to recover from.

I think, if somebody is listening to this with a careful ear, they're like, "But if I was doing it myself, it could go down just as easily," and that's entirely true, and this is a complexity balance thing.

It's not like you're going to do a better job managing your email system than a service provider is doing. They arguably have better teams, and they focus on doing it; it's the main thing they do. But they actually do it in a much more complex way than you might.

You might be able to run a service or a system in the backend for yourself in a much simpler way, than Amazon would do it for you. They might have the capability to absorb that difference, but we're starting to see that they might not.

Gordon:  I want to touch on another element of supply chain that's very much in my ballpark, in open source, and that is the software supply chain. One of the things we've been seeing recently, in fact, was a [US Federal] executive order earlier this year that related to this, among other things.

Of the software out there, open source or not, 90 percent of it came from somewhere else. That somewhere else might include somebody who does this as a hobby in their spare time in their basement. There was a lot of publicity around this with the Heartbleed exploit a few years ago.

I think some of those low hanging fruit have been cleared off, but at the same time they...

Rob:  We're talking about Log4j, which is dominating the news cycle right now, and that's maintained by a couple of volunteers, because we thought it was static, stable code. No. It is a challenge, no matter which way you go. I think there are two places that you and I both want to talk about with this.

Part of it is that open source aspect and how the community deals with it. The other part is the assumption that we're going to keep finding vulnerabilities and errors in everything, and the user's responsibility to be able to patch and update from that perspective, which is also part of open source.

Gordon:  Yeah. Actually, to the point of user responsibility, we ran a survey fairly recently, something we do every year, our Global Tech Outlook survey. A lot of the questions there are around funding priorities.

As you could expect, security was... well, at least ostensibly, a funding priority. I'm sometimes a little uncertain about how to take these answers: oh yes, we know it should be a funding priority, whether or not it actually is. But anyway, we asked what the security funding priorities were underneath that.

The top ones were things you'd expect: classic security stuff like network security, so presumably firewalls and the like. The very bottom, though, and we were just talking about supply chains, the very bottom was essentially the software supply chain. This is after Joe Biden putting out the executive order and everything.

I don't know quite how to take that one. One interpretation says, the message hasn't really gotten out there yet. I don't know to what degree I believe that. The other way to take it is that, yeah, this is important, but Red Hat's taking care of that for us.

Even though we are using all of this open source code in our own software. Then I think the third interpretation may be that, yes, this is a priority, but we don't think it is very expensive to fix.

Rob:  I think that the security budget is very high for magic wands and the availability of magic wands is very limited. [laughs] What you're describing at the bottom of the stack of the software supply chain, is part of what I see the complexity problem being.

We have to step back and say, "How do companies cope with complexity?" [laughs] The number one way they cope with complexity is to ignore it.

It's like, I'm going to send it to a SaaS and pretend they're going to be up all the time, or I'm going to use this library and pretend that it's perfect and flawless. Everything's great.

I agree with you. Red Hat with the distro effectively is doing some of that work for you, and you're buying the, "OK, somebody has eyes on this for me." That assumption is maybe marginally more tested. 

I think when we start looking at these systems, we need to think through, OK, software is composed of other components and those other components have supply chains. Those components have components.

Before we got into containers, we used to do RPM installs and updates of whatever it was. We had to resolve all those dependency graphs dynamically, at the moment, and it was incredibly hard to do. Software was very fragile from that perspective.

A lot of people avoided patching, changing, or updating because they couldn't afford to resolve that dependency graph. They ignored it.

Docker let us make that immutable, put it all into a container, and do a build-time resolution for it, which I think is amazing. But it still means that you have to be thinking it through, at least at some point, when you're pulling all those things in.

I don't think most people think of containers as solving the complexity problem of the application dependency graph, but I do. It's one of those ways that you can very consciously come in and say, "We're going to manage this very complex thing in the process." It's a complex thing if it's fragile.
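The dependency-graph resolution described here can be made concrete with a toy sketch. This is not how RPM or a container build tool works internally; the package names and graph below are invented for illustration, and Python's standard-library `graphlib` stands in for a real resolver:

```python
# A toy sketch of dependency-graph resolution: compute an order in
# which packages can be installed so every dependency comes before
# its dependents. Real resolvers also handle versions and conflicts.
from graphlib import TopologicalSorter

# package -> set of packages it depends on (all names hypothetical)
deps = {
    "app":    {"libweb", "liblog"},
    "libweb": {"libssl"},
    "liblog": set(),
    "libssl": set(),
}

def install_order(graph):
    """Return an install order with dependencies before dependents."""
    return list(TopologicalSorter(graph).static_order())

order = install_order(deps)
print(order)
```

The build-time resolution Rob credits to containers amounts to running this once at image build and freezing the result, instead of re-solving the graph on every host at install time.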

Part of managing complexity is being able to say, "Where do I have hidden complexity lurking in what I do?" If you have something that's fragile and hard to repeat, or that requires a lot of interventions to fix, you've identified a part of your process that is complex, or maybe dangerously complex, from that perspective.

Gordon:  Some things inherently have a degree of complexity. Again, there are probably some magic SaaS things out there for some very small problems, but by and large, you're still not eliminating complexity there.

I think the other related problem that we're seeing right now, again from our Global Tech Outlook survey, is training and people skills. There seems to be a real shortage; it's hard to hire people. You're the CEO of a company, so I'm sure you're exposed to this all the time.

Rob:  We are, and our customers are. Our number one challenge with any go-to-market with our customers is the life cycle of the employees at the customer site. We have customers where they have a reorg and a management change, or they lose the lead on a solution, and we have to reset and retrain.

It sets schedules back, let alone hiring our own people and getting them up to speed. There's a huge problem with this. I don't think companies are particularly watching how the work their people do adds to the complexity and increases the risk of what they do.

A lot of times, people inherently absorb complexity risk by doing the work. We do DevOps and automation work. Our job is to create standard repeatable automation, Infrastructure as Code.

The tools that are out there today, people use them. They use them in ways that they think are right and work for them. They don't have a lot of repeatable practice that each team follows or can be replicated across the organization and so you get into the training.

This is where I'm trying to go with the skills training piece. Skills training is partially, "How do I use the tools and the infrastructure and the automation?" Part of it is, "How do I figure out what the person before me did, or the person next to me is doing so that we can connect those pieces together?"

We spend a lot of time and add a lot of complexity when we don't take time to understand. This is a development practice, "How do I get code that I don't have to maintain, that I don't have to understand?" That actually is another way to reduce complexity with this. Does that make sense?

I think about Infrastructure as Code, which is a new way of thinking about DevOps and automation. Today, a lot of people write their own automation and it's very hard to reuse or share or take advantage of standard practice. But we do in development. In development we do a good job with that.

You can say, "All right, this library or this component is standard, and I'm going to bring it in. I'm not going to modify it, I don't need to. I'm going to use it the way it's written." That approach reduces the complexity of the systems that we're talking about quite a bit.

Gordon:  I think one of the things that we're seeing, right now I'm working on something called Operate First, and our idea here is to basically form a community around essentially an open source approach to large scale operations.

It's still pretty early on. Operations have been divorced from open source and, frankly, DevOps notwithstanding, from a lot of the development effort and practices that have gone on there.

Rob:  I strongly agree with you. It's one of those things because open source communities are amazing at building infrastructure, or building code, building practice.

It's been a struggle for me to try and figure out how to help people collaborate about operations, what I used to think of back in my OpenStack days as glass-house operations. You can't expose everything, because there are secrets and credentials and things like that.

In building this next generation of automation and infrastructure pipelines, we need to figure out: how do we have more collaboration? How do we have people share stuff that works? How do people build on each other's work? And, and this is the heart of it, how do we recognize that operations have a lot of complexity in them?

There's a difference. You can't turn around and say, "Hey, you're going to operate this exactly like I do, and it's going to be simpler for you because you're following me."

We saw this in the early Kubernetes days. Some people wrote an Amazon installer called kOps, K-O-P-S; this is the one I'm thinking of specifically. Literally, if you set it up exactly as is, it would install Kubernetes on Amazon very reliably, because Amazon was consistent. [laughs]

From there, it fell apart when people were like, "Well, wait a second. My Amazon's different than your Amazon. I want to do... These other clouds are totally different than that. They don't have anything that works like that."

What we see is, the collaborating around an operational strategy gets really hard as soon as the details of the operational system hit. To the point where people can't collaborate on the ops stuff at all.

Gordon:  Yeah, it's definitely challenging. That's one of the things we're still trying to work out as part of this. Speaking of OpenStack, we're working closely with OpenInfra Labs on this project. It's definitely challenging. I think it's something that we need to get to though. There are tools out there. You mentioned Terraform I think when we were discussing this, for example.

Rob:  This is, for us, trying to... I like where you're going, because open source to me is about collaboration. Fundamentally, we're trying to work together so that we're not repeating work, so that we're able to build things together.

When I look at something like Terraform, which is very focused on provisioning against a YAML file: the providers are reusable, but the Terraform plans aren't. There are two levels of collaboration here. There's collaboration in the industry, which we want to have, and there's also collaboration inside of your organization, in your teams.

That's one of the things I always appreciated with open source and the open source revolution where we really started focusing on Git, and code reviews, and pull requests and that process which has really become standardized in industry which is amazing.

People forget how much of that is an open source creation: the need to do all these code reviews and merges. We need to figure out how to apply that to the Infrastructure as Code conversation.

I've seen a lot of people get excited about GitOps. To me, GitOps isn't really the same thing. It's not Infrastructure as Code in the sense of building a CI/CD pipeline for your automation, or being able to have teams reuse parts of that automation stack, or, even better, delegate parts of that stack so that the Ops team can focus on one thing and the Dev team can focus on another, just like a CI/CD pipeline would put those pieces together.

That's right, that sharing of concerns is really, I think, part of what we're talking about in the open source collaboration model.

Gordon:  Yeah, it's a very powerful thing. I don't think people really thought about open source in this way at the beginning, but it's really come to be, to a large degree, about the open source development model: creating, in many cases, enterprise software using an open source development model.

We don't have an open source operations model with that same virtuous circle of projects, products, profits, feeding back into the original community; that's stolen from Jim Zemlin from the Linux Foundation.

I think you want to create that same virtuous cycle around operations. In fact, I think it was at the Linux Foundation member summit that there was a very elaborate version of that virtuous cycle for open source development, and operations was not mentioned once in the entire slide.

Rob:  This, to me, is why we focus on trying to create that virtuous cycle around operations. It really does come back to thinking through the complexity cycle: how do you prime the pump and have people, even within an organization, sharing operational components, let alone feeding them back into a community and having others be able to take advantage of them?

I should be specific. Ansible has this huge library, Galaxy, of all these playbooks, but it has copies, in some cases hundreds of copies, of the same playbook. [laughs] Because the differences (this is where the complexity comes in) between one person's thing and another person's thing are enough to break it.

That's the complexity, and sometimes, I think, people don't want to invest in solving the complexity problem to make that work. You have to be willing to say, it's simpler for me to ignore all that stuff and write my own custom playbook or Terraform template or something like that. But from the standpoint of building this virtuous cycle, you've already broken the cycle as soon as you do that.

You have to look at not eliminating the complexity but managing it. For us, that means defining things; we spend a lot of time with definable parameters.

When we put something through an infrastructure pipeline, the parameters are actually defined and immutable (part of that input), because that creates a system that lets things move forward and can be discovered. I think this is where you sit back and you're like, "OK, the fastest, most expedient thing for me in Ops might be not managing the complexity that somebody else could pick up."
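One way to picture "defined and immutable" pipeline parameters is a frozen record that stages can read but never modify mid-run. This is only an illustrative sketch under that reading, not RackN's actual implementation; the class and field names are invented:

```python
# A sketch of defined, immutable pipeline parameters: once created,
# no stage can mutate them, so every later stage can trust the input.
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class PipelineParams:
    cloud: str        # hypothetical fields for illustration
    region: str
    node_count: int

params = PipelineParams(cloud="aws", region="us-east-1", node_count=3)

# Any attempt to change a parameter mid-pipeline raises an error.
try:
    params.node_count = 5
except FrozenInstanceError:
    print("parameters are immutable")
```

Freezing the input is one concrete way to make a pipeline "discoverable," as Rob puts it: any stage can inspect the parameters knowing no earlier stage silently rewrote them.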

Gordon:  Yeah, I think there's a certain aspect of tradition. The way things have often been done in Ops is that you traditionally hacked together a script that solved a particular problem and went on with the rest of your day, rather than sitting down and generalizing and working with other people.

I'm sure there are exceptions to that at some of the large-scale companies out there, but it definitely is, historically, how things tended to happen.

Rob:  There are two things that we see driving this. One of them is not accepting that heterogeneity is part of life. When you look at a script, you're like, "OK, I can eliminate a whole bunch of code," which is generally seen as a good thing, by ignoring all these edge cases that somebody else injected in there.

Those things exist for a reason. Even if you don't care about the server, the vendor, or the cloud that you're dealing with, it's smart to try and figure out how that works so that you can keep it in, because that keeps the code better from that perspective.

Then there's another piece to it, where, as we connect these pieces together, we actually need to be aware that they have to hand off and connect to things. The other mistake I've seen people make in a complexity management perspective is assuming their task is the only task.

I see this a lot when we look at end-to-end provisioning flows: configuration tasks are different from provisioning tasks, which are different from monitoring tasks, which are different from orchestration tasks. You have to understand that they're usually intermixed in building a system well, but they are different operational components.

You might have to make something that's less efficient or harder, or has more edge cases, in one case, to do a better job interacting with the next thing down. I'm thinking through how this translates in open source more generally. Do you want to add something to that?

Gordon:  No, I think one of the things we've struggled with around Operate First and similar types of work is turning this into what we can concretely do going forward. I'm curious, maybe in closing out, what some of your thoughts are. What are the main next steps for the industry and for operations communities over the next year or so?

Rob:  We've gotten really excited about this generalized idea of an infrastructure pipeline, because it's letting us talk about Infrastructure as Code beyond the Git-and-YAML discussion and talk about how we connect together all of these operations.

When we think about collaboration here, what we're really looking at is getting people out of the individual step of that pipeline conversation and start thinking about how things connect together in the pipeline.

It's a nice analogy back to the CI/CD revolution from five or six years ago, where people would be like, "Oh, CI/CD pipelines are too hard for me to build, it's all this stuff, and I'm gonna deploy 100 times a day like they do in the..." At the end of the day, you don't have to look at it that way at first.

The goal is to be doing daily deployments, with every commit going to production and things like that. But the first thing you need to do is just start connecting two adjacent pieces together in a repeatable way. A lot of times, that means two teams collaborating together, or it means being able to abstract out the differences between cloud types or hardware types or operating system types.

That's what I hope people start thinking about: how do we connect together a couple of links in this infrastructure pipeline chain?
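The idea of connecting adjacent pipeline links through an explicit, repeatable handoff can be sketched in a few lines. The stage names and payload shapes here are invented for illustration and aren't taken from any real tool:

```python
# A minimal sketch of two pipeline stages connected by an explicit
# handoff: each stage consumes only the record the previous stage
# produced, never hidden global state.

def provision(params):
    # Pretend to provision machines; return a handoff record.
    return {"machines": [f"node-{i}" for i in range(params["count"])]}

def configure(handoff):
    # Consume only the declared handoff from the provision stage.
    return [m + ":configured" for m in handoff["machines"]]

def run_pipeline(params, stages):
    """Thread the output of each stage into the next one."""
    state = params
    for stage in stages:
        state = stage(state)
    return state

result = run_pipeline({"count": 2}, [provision, configure])
print(result)  # ['node-0:configured', 'node-1:configured']
```

The payoff Rob describes shows up in the interface: because the handoff between links is explicit, each stage stays simple and can be swapped or reused independently.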

The reason why I'm so focused on that, is because if you can connect those links together, you've actually managed the complexity of the system. In some cases, what you've done is you've made it so that the tools that you're using focus on doing what they do well.

One of the things in Ops that I find really trips people up is when they use a tool outside of its scope. [laughs] Using a tool for something it's not designed for often causes a lot of complexity. Sure, you can do it.

Sometimes that adds a lot of complexity because the tool wasn't designed for that and you're going to run it into a fragile state or you're going to have to do weird things to it. This lets you say, "All right, this tool works really well. At this stage, I hand off to the next stage, I hand off to the next stage."

It's maybe more complex to build a pipeline that does that, but the individual components of the pipeline are then actually simpler. And the connections between things, now that they're exposed, have also reduced the complexity budget of your system, because you've worked on those interconnects.

The way I think about that is from a coupling perspective.

Gordon:  Great. That's probably a good point to end on. Anything you'd like to add?

Rob:  No, this was fantastic. It's exactly the topic that I've been hoping to have around managing complexity and thinking about it from a community perspective. I appreciate you opening up the mic so that we could talk about it. It's a really important topic.

Gordon:  Thanks, Rob. Enjoy the holidays.

Rob:  Thanks, Gordon. It's been a pleasure.


Monday, December 20, 2021

RISC-V with CTO Mark Himelstein


RISC-V is an open instruction set architecture that's growing rapidly in popularity. (An estimated two billion RISC-V cores have shipped for profit to date.) In this podcast, I sat down with Mark Himelstein, the CTO of RISC-V International, to talk about all things RISC-V including its adoption, how it's different from past open hardware projects, how to think about extensibility and compatibility, and what comes next.

Listen to the podcast [MP3 - 22:54]

[TRANSCRIPT]

Gordon Haff: I'm very pleased to have with me today Mark Himelstein, the CTO of RISC-V International, which just held a summit in San Francisco that I was pleased to be able to attend in person.

Welcome Mark. Maybe you could just introduce yourself and maybe give a brief overview of what RISC-V is.

Mark Himelstein: I'm Mark Himelstein. I'm the CTO. I've been in the industry for a bit. I was an early employee of MIPS, I ran Solaris for Sun. I've done a lot of high-tech stuff, and I've been with RISC-V for about a year and a half. Very excited. This was an incredible year for us, a very big change for us.

First of all, we believe that there's been well over 2 billion RISC-V cores deployed for profit this year which is an important thing. Success begets success and adoption begets adoption.

A lot of people joined us early on and they're early adopters, and now, you're seeing people say, "Oh, they're successful now. I can be successful."

RISC-V is an instruction set architecture kind of halfway between a standard and open source Linux, kind of right in between there. We don't do implementations. We're totally implementation-independent. We work with other sister organizations that are nonprofit like lowRISC, and CHIPS Alliance, and Open Hardware who do specific things in hardware with RISC-V.

We just really work on the ISA -- the instruction set architecture -- and we work on fostering the software ecosystem. All compilers, runtimes, operating systems, hypervisors, boot loaders, etc., all the things that are necessary for members to be successful with RISC-V.

It's a community of probably about 300 institutions and corporations. There are probably over 2,000 individual members, somewhere around 55 groups doing technical work, about 300 active members in those groups, and about 50 leaders.

They just did an incredible job this year, ratifying 16 specifications. In 2020, we did one, so that's very big growth for us. A lot of things had been hanging out there for some time, four to six years: things like Vector and Scalar Crypto, very innovative things, as well as some basic stuff like hypervisor and bit manipulation.

We finally got the standard out, so everybody's grateful for that.

Gordon: I want to talk about standards a little bit more in a moment. You mentioned this open ISA. What was the thinking behind taking this approach? Because obviously, there have been earlier open hardware or semi-open hardware types of projects, which haven't necessarily had a big impact, or at least not as big an impact as some people had hoped they would have at the time.

How is RISC-V different?

Mark: Yeah, it's a really good question. One of the problems when you hand something over whole cloth as open source is that it's hard for people to really feel ownership around it. The one thing that Linux did was everybody felt a pride of ownership. That was really hard to do.

We are the biggest open source ISA that was actually born in open source, unlike the other ones. People are afraid that if one of these big corporations that's behind an architecture goes away, then the open source will go away, the actual standard will go away. Rightfully so; we've seen that occur in the past.

RISC-V comes along, and it's different. Krste Asanović at Berkeley wanted to do some stuff. The story was, he wanted to do some vector work, and Dave Patterson had done RISC-I, II, III, and IV. They came up with this RISC-V, where the V doubles as the Roman numeral five and as vector.

All of a sudden, there's this groundswell of people who are interested in it. It got so exciting for folks that in 2015, they started plotting how to make it an open source organization, and they did in 2016. It's just taken off from there. People had been dying for this.

It's very clear. There's flexibility with respect to pricing: it's free. More importantly, there's also flexibility with respect to customization. You can do anything you want with it; nobody's standing over your shoulder.

We provide places for people to do custom opcodes and encodings and things like that. It's set up for extensibility. We believe that it will last for a long time because you can extend it over and over and over again, as we did this year: we added Vector, we added these other things.

It's extensible. It's free. It's flexible to use any way you want to. We've also had a renaissance in EDA over the last 15 years.

It's a lot easier to pump out a bit of logic to go off and do, say, a security module using a RISC-V core, where it would have been harder to do that around the year 2000. That's gotten easier. This combination of things has been incredible.

You see adoption and you see deployment of products more in the IoT embedded space because the runway is shorter. It's not a general-purpose computer. You're running one application, you get it working.

Wearables and industrial controllers and disk drives, and accelerators that go into data center servers for AI and ML and graphics. All those things, you're seeing them first. Then, the general-purpose computers come out a little bit later.

Accepting there are always exceptions, Alibaba announced at the summit last year that they have a cloud server based on RISC-V, and they have their next generation coming out.

You see RISC-V in every single part of computer science, from embedded to IoT to Edge to desktop to data center to HPC. I even have a soldering iron made by pine64.org that has a RISC-V processor.

Gordon: To this point about extensibility, there was a fair bit of discussion at the RISC-V Summit over, essentially, fragmentation versus diversity: this idea that you have all these extensions out there, but if people use them willy-nilly, then you're breaking compatibility.

I know there are some specific things like profiles and platforms that are intended to address that potential issue to some degree. Could you discuss this whole thing?

Mark: Yeah. I have a bumper sticker statement that says, "Innovate. Don't duplicate." That's the only thing that keeps us together as a community. Why do you want to go ahead and implement addition and subtraction for the thousandth time? Why do you want to implement the optimizers for addition and subtraction the thousandth time? You don't.

That's the reason why so many people are coming to the table as part of a community with the kind of contributor culture that was built by Linux.

Why are they showing up? Why are they doing work? They're doing it because they realize they don't want to do it all. It's too expensive to do it all. There are many countries or companies that were doing unique processors themselves because the licenses or the flexibility weren't available in other architectures.

They don't want to do their own stuff. The same reason why people didn't want to get hooked into Solaris or AIX. All those things that are going to Linux have gone to Linux.

It's the same reason why the adopters of RISC-V don't want to be beholden to one company. They want the flexibility and the freedom to prosecute their business the best way that they see fit, and we allow them to do that.

Now, they want to share. How are we going to have them share? We have the same thing that shows up with something like Linux, in that we have to make sure that there are versions that work together.

We've done it the same way that you have generational sets of instructions that work together, either by a version number or a new product name or a year.

We have the same thing with us, with profiles. RVA is the application profile, RVM is the microcontroller bare-metal profile. They'll both be coming out almost every year, initially, and probably slower as time goes on.

RVA20 is the stuff that was ratified in 2019. RVA22 is the stuff that was ratified in 2021. It works for all applications. We can tell the distros, and we can tell the upstream projects like the compilers, GCC and LLVM: this is what you go after.

Everybody knows, all the members know. If they're going to do something unique and different, they have to support that themselves. If they want to negotiate with the upstream projects, we don't get in the way, they can go ahead and do that.

The upstream projects know the profiles that are most important. The platforms are very similar, but for operating systems. We want vendors to be able to create a single distro, a single set of bits that people download, configure, and have work. Things like ABIs, things like discovery, things like ACPI, all those things are found in the platform.

The same thing will happen there; it will come out on a yearly basis. There's, again, an application-layer platform, and there's a microcontroller one for real-time OSs and bare-bones things. As you might imagine, the bare-bones versions, both in the profiles and the platform, are very sparse.

There's not much in there, because people don't want to do a whole lot, to the point where we had the M extension previously, and that M extension had multiply and divide. They don't want divide. It's too expensive in IoT, so we're breaking it apart.

We're going to have a separate multiply extension that people can go ahead and use. Both of them are optional at the bottom end. We've provided a way that all the upstream things can go ahead and deal with it, all the distros can deal with it. Then, people can jump on board and use those things.
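To make the extension-naming scheme concrete, here's an illustrative sketch, not an official RISC-V International tool, that splits an ISA string such as rv32imac_zmmul into its base and its extensions. Per the convention, single-letter extensions run together after the base, while multi-letter extensions like Zmmul (the multiply-only extension split out of M) are set off by underscores.

```python
# Illustrative sketch: split a RISC-V ISA string like "rv32imac_zmmul"
# into its base and extension list, per the naming convention where
# single-letter extensions run together and multi-letter ones (Zmmul,
# Zicsr, ...) are separated by underscores. Not an official tool.

def parse_isa_string(isa: str):
    isa = isa.lower()
    if not isa.startswith(("rv32", "rv64")):
        raise ValueError("expected an rv32* or rv64* ISA string")
    base, rest = isa[:4], isa[4:]
    chunks = rest.split("_")
    # First chunk is the run of single-letter extensions, e.g. "imac".
    exts = list(chunks[0]) + [c for c in chunks[1:] if c]
    return base, exts

base, exts = parse_isa_string("rv32imac_zmmul_zicsr")
print(base, exts)  # rv32 ['i', 'm', 'a', 'c', 'zmmul', 'zicsr']
```

An IoT implementation that wants multiply without divide would advertise zmmul rather than the full m, and toolchains and distros can key off the parsed list.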

Ultimately, the goal is simple, be able to take the application that was compiled for one implementation and have it run on another implementation, have them produce the same results within the bounds of things like timing and other things like that.

Same thing with operating systems: one set of bits you'll be able to download to multiple implementations, configure, and have work. That's how we're working on constraining fragmentation and giving you a tool to be able to do it. Again, the reason people don't want to fragment is so that they can share.

Gordon: It was Dave Patterson who made a comment in "Meet the Board" before the RISC-V Summit that, for a lot of uses -- you alluded to this with IoT devices -- the sort of microprocessor compatibility you've had with x86 is often not the right lens through which to look at RISC-V. It can be, of course, but it isn't, necessarily.

Mark: Even those guys want to share things. They're not going to want to do their compiler from scratch, but they're using the base 47 instructions, instead of all the rest of the extensions. They don't care about those because of exactly what you said.

Again, the thing that brings people together is common things that they have to do over and over again. I'll give you one very simple example: we're working on something called fast interrupts right now. What does it mean? It's shortening the pathway to get into an interrupt handler, not having to save and restore all the registers, for embedded.

That's what it's for. Very simple. All the embedded guys are in there, even though they're doing their own thing. They want to agree on one set of calling conventions and make it easy for them to do that.

That's not something that they're using for interoperability between their parts. That's something they're using, so they don't have to duplicate the work between the companies.

Gordon: Let me ask you a couple of related questions. The first of them is, where were the initial wins for RISC-V? A related question is, have there been wins with RISC-V that you didn't expect?

Mark: First of all, remember, we don't collect any reporting information. We don't require that somebody tell us how many cores, what they're used for, or anything like that. Anything we get is anecdotal.

The other thing is we don't announce for anybody. It's not our job to do that. We'll help amplify. We have a place on the RISC-V website, called the RISC-V Exchange, where everybody can advertise for free. All that's wonderful.

The stuff we hear comes from side meetings at conferences, like the summit and stuff like that. We know that there are more design wins and deployments than we know of in the IoT embedded space, again, because of the runway. It's not a general-purpose computer.

One that's exciting, that people may not realize, is that a lot of the earbud manufacturers, especially out of China, are using RISC-V as their core. One is called Bluetrum: probably tens of millions of units per month with RISC-V cores. That's exciting to me.

I think that again, it's one of those things where it shows off the ability to take a RISC-V core, do something with it quickly, and get it out there. I have in my house 85 WiFi connected devices with switches and outlets and doorbells and gates and garages and all that stuff. 10 percent of them are Espressif.

Espressif, again, is a member. They have gone ahead and produced RISC-V modules; you can see the RISC-V module in home automation stuff. There are a lot of things showing up in a lot of places that we may not hear about right away.

We hear about them secondhand. They are, A, a surprise, but B, exciting, and C, they engender success. When people see other people being successful doing this, they go and say, "Hey, I can do this, too." I think that that's amazing.

Again, you're going to see this continue up the chain. There are exceptions, like Alibaba doing their cloud server, but servers are a little bit further out. The HPC guys are actively working on things: the European Processor Initiative, the Barcelona Supercomputing Center. All those guys are working on stuff. We know that the United States government in various places is working on things.

The gentleman who runs our technology sector committee, a guy named John Liddell from Tactical Labs in Texas, works with various government organizations, and has simple things like Jenkins rigs to do tests for RISC-V and stuff like that.

There's a lot of work that goes in various areas, but I don't think there's a single part of computer science that isn't looking at RISC-V for something or another, whether it be a specialized processor to help them do security or processing for ETL, or something like that, or something that's a general-purpose thing. It's everywhere.

You're going to see more and more products come out over time. We're not the only ones who are taking a look at how much is coming out. All the industry analysts have put out numbers, and they're predicting in the range of 50 billion to 150 billion cores out there in a very short period of time. It's going to grow as people see that it's an easy thing to do.

Gordon: What is your role at RISC-V? What do you see your primary mission as being?

Mark: I like to make things simple. The most important thing for me is the proliferation of RISC-V cores for profit. That has to be the thing that stays in your mind. In the short term, my goal is to get people over the goal line with the pieces they need.

In 2020, we produced one spec, in 2021 we did 16. That's through the effort of me and everybody else in the team in order to prioritize, put governance in place, get them help where they needed help, and try to push things over the goal line. Get those specs out there that the members care about in order to make their customers successful.

Then, finally, the ecosystem. Look, without compilers, without optimizers, without libraries, without hypervisors, without operating systems, it just doesn't matter. It doesn't matter how good your ISA is. Having all those pieces there is really important.

I'm a software guy, and they hired a software guy to do this job because of that. I've worked in the NSA, and I understand software everywhere from boot loaders up to applications.

I've worked on all those pieces. It's really critical, and you're going to see us put even more emphasis on that. That's been the greatest growth area in our groups over the last year, and you're going to see continued effort by the community.

Gordon: I think you've maybe just kind of answered this, but if you look out in a year, two years, what does success look like or conversely, what would you consider to be flashing alarm lights or bells going off?

Mark: One of the things that we haven't done up until now is really put a concerted effort toward industries. A lot of it has been really bottom-up: "Hey, we need an adder, right? We need multiply. We need vector, right?" Those are things where we go, "Hey, other architectures have this."

Now, we're really starting to take a look from the board, to the steering committee, down through the groups at things like automotive, at things like data center, at finance, at oil and gas, at industries and trying to take a look holistically at what they need to succeed.

Some of it's going to be ISA. Some of it's going to be partnering with some of these other entities out there. Some of it's going to be software ecosystem. The goal is to not peanut-butter-spread our efforts to a point where nobody can be successful in any industry, right?

It's important we say, "OK, you're doing automotive." All of a sudden, you have to look at ASIL and all these ISO standards, functional safety, blah, blah, blah, and we have to make sure that stuff occurs. We have a functional safety SIG by the way.

Success, to me, looks like continued deployment of cores that are sold for profit, and then starting to attack some of these industries holistically that need these pieces and make sure that all the pieces they need inside of RISC-V are there and working and completed.

Gordon: Well, thank you very much. Is there anything else you'd like to add?

Mark: Well, again, I think the biggest thing is just a big thank you to you and the rest of the community for being inquisitive and participating and joining the contributor culture, and helping make RISC-V a success. We're always looking for people to help us and join us, so look at riscv.org. If you have any questions, send mail to help@riscv.org. Thank you very much.

Gordon: Other than just going to riscv.org, are there any particular resources that somebody listening to this podcast might want to consider looking at?

Mark: If they're very technical, under riscv.org, there's a tech tab. Underneath there, there's a tech wiki. That has pointers to GitHub with all the specs, to the upstream projects, GCC and LLVM, to our governance, all those things. It gives you a really good jumping-off point. There's a getting-started guide there as well for tech guys.

In general, if you're not a member, become a member. It's really easy. If you're an individual, you can become a member for free. If you're a small corporation just starting out, we have some breaks. There's different levels of membership, strategic, premier TSC, premier. Come join us. Help us change the world. This is really different.

I had no clue what this was when I joined it. I'm very grateful, and I'm very happy to see it really is making a very big difference in the world.

Wednesday, December 01, 2021

The Open Source Security Foundation with Brian Behlendorf

The Open Source Security Foundation (OpenSSF) is a fairly new organization under the Linux Foundation focusing on open source software security with an initial primary focus on software supply chain security. Brian Behlendorf recently moved over from the Hyperledger Foundation which he headed up to take over as General Manager of OpenSSF.

I was able to grab a few minutes with Brian at the Linux Foundation Member Summit at the beginning of November. We talked about the genesis of OpenSSF, his initial priorities, how to influence behaviors around security, and what sorts of "carrots" might help developers to develop more secure software.

Some links to topics discussed in the podcast:

Listen to the podcast [MP3 - 16:26]

[Transcript in process]

Gordon Haff:  Hi, everyone. This is Gordon Haff, technology evangelist at Red Hat. I'm pleased to be here at the Linux Foundation Member Summit in very nice Napa, California, with Brian Behlendorf, who's the newly minted general manager of the Open Source Security Foundation.

Brian, what is the Open Source Security Foundation? What was the impetus behind creating this?

Brian:  Over a year ago, actually, just before the pandemic started [laughs], simultaneously at two different firms -- at GitHub and at Google -- small groups of companies got together, each starting to really think about this problem of application security and dependencies and what happens during build time and distribution.

There are all these blind spots that we have in the open source community around how code is built and deployed and makes its way to the end users. It's funny how both started simultaneously. Then, people realized it'd probably be better to combine forces on something like this. There wasn't any budget; there wasn't really any clear ownership.

The Linux Foundation stepped in, partly at the behest of these companies, and came to be a home for the informal collaboration around ideas around what really could be done here. 

Then, that group came up with: Let's focus on developer identity and signatures on releases and things. That became a working group.

Let's look at best practices and education materials: That became the best practices working group.

Six different working groups were formed and a bunch of projects underneath.

Then, some momentum started to build, and a recognition that there might be some systematic ways to address these gaps across that entire lifecycle: code coming out of a developer's head going into an IDE, them choosing the dependencies to build upon, and then all the way to distribution.

There are all these points of intervention, places where there could be improvements made. That became the Open Source Security Foundation. Then, after about a year of this mapping the landscape and figuring out what to do, it was clear that there were some places where some funding could be applied.

In the typical Linux Foundation fashion, we said, "Well, let's see who's interested in solving this problem together." There are a bunch of existing organizations, about 60 or so. Most of those and a whole bunch of new ones came together and agreed to pool some funds.

Which ended up being over $10 million to go and tackle this space. Not with the specific idea of, "We're going to build this product," or "We're going to solve this one thing," but a more general purpose like, "Let's uplift the whole of the open source ecosystem."

"Raising the bar" is one way to think of it, but "Raising the floor" is a phrase that I think I prefer better.

As momentum built around a properly funded entity -- I had been concerned about this space for a long time, while leading the Hyperledger initiative as executive director, as well as Linux Foundation Public Health -- I said, "I'm happy to help on this, and I probably should," so I jumped over to lead this as executive director.

We launched that in mid October, announced the funding, and I have our first governing board meeting tomorrow, Friday.

Gordon:  Good luck with that. Not to make this too inside baseball, but Linux Foundation had had their Core Infrastructure Initiative that was kicked off, I guess, by Heartbleed and some of those problems, and it seems the focus has shifted a bit.

Brian:  The Heartbleed bug was specifically in OpenSSL. Jim Zemlin, my boss, went around and passed the hat and did raise a healthy chunk of funds to try to expand it beyond the two developers named Steve working in their spare time, or in their consulting time, I think, into a larger community. I think that had some success. We've had other initiatives like the CII Badging effort, which is now being rolled into OpenSSF, and lots of focus on security in the Linux kernel, so there've been these different security initiatives.

Oh, and a really big one has been SPDX, which started life initially focused on licensing and making sure that this big tarball of code I have and all the dependencies are appropriately licensed and appropriately open source, "I'm following all the rules," and that kind of thing.

Now people are realizing, "Oh, it's easy to extend this to a proper SBOM type of scenario." I can understand if I've got these versions of code, which ones are vulnerable, and really helps with the auditing and understanding not just in this tarball, but in my entire enterprise, "Where might I be vulnerable to outstanding CVEs that have been fixed by updates?" and that kind of thing.
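The kind of auditing Brian describes can be sketched in a few lines. This is a toy illustration of SBOM-driven vulnerability matching; the package names and CVE identifiers are invented for the example, and a real tool would match version ranges against a feed like the NVD rather than exact versions.

```python
# Toy sketch of SBOM-driven vulnerability auditing: given a parts list of
# (package, version) pairs and a table of known-vulnerable versions,
# report what needs updating. Names and CVE IDs here are made up; real
# tools match version ranges against vulnerability feeds such as the NVD.

sbom = [("libfoo", "1.0.1"), ("libbar", "2.3.0"), ("libbaz", "0.9.2")]

known_vulnerable = {
    ("libfoo", "1.0.1"): "CVE-2021-0001",
    ("libbaz", "0.4.0"): "CVE-2020-0002",
}

def audit(sbom, known_vulnerable):
    return [(name, ver, known_vulnerable[(name, ver)])
            for name, ver in sbom if (name, ver) in known_vulnerable]

for name, ver, cve in audit(sbom, known_vulnerable):
    print(f"{name} {ver}: affected by {cve}")  # libfoo 1.0.1: affected by CVE-2021-0001
```

The value of a machine-readable SBOM is exactly that this check can run across an entire enterprise, not just one tarball.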

The SPDX effort, which is now an ISO standard, isn't something rolled under this, but it's another, complementary Linux Foundation effort. With this, I think we're trying to be more systematic about what the tooling is, what the specifications are, what the standards are.

Also, what's some training we can do? What are ways to help the individual open source projects, even outside the Linux Foundation, have better processes and be better supported in prioritizing security.

Gordon:  A lot of what I've heard about security at this event has been around supply chain security. Obviously, security covers a lot of stuff. It at least appears that your real focus initially is on this supply chain security.

Brian:  It's funny. I wanted to call it software NFTs, but I got shut down on that. Somebody told me earlier this week that we used to call this SCM, software configuration management. In fact, source code management tools, like GitHub or Git and Subversion, have long been about having that picture of where software came from.

The metaphor of the supply chain: not only are supply chains hot because of the ships sitting off the Port of Long Beach, there's also this recognition that software does have a journey, and that we are building on top of open source components so much more than previously.

I remember 25 years ago, when I was first getting involved, you think about what the dependencies were in Apache httpd, it was like glibc. Whatever the operating system provided, it was pretty minimal.

These days, open source packages will have thousands of dependencies, partly because developers who push to npm and PyPI and places like that often do tiny packages of around 10 lines of code. You aggregate all these together, and it ends up being much harder to audit, much harder to know if you're using updated versions.

The framing of a supply chain seemed to better crystallize the fact that there are a whole lot of different parties that touch this stuff. It also helps characterize that this is an issue that industry cares about, that is global in nature, and which governments are starting to care about now too.

I'd say one of the big galvanizers for getting this to be a funded initiative was the White House executive order back in May, calling for the technology industry, not just open source, to get better about supply chain security, to address the kinds of vulnerabilities exploited in the hack of SolarWinds and other famous breaches in the last few years.

Gordon:  I'd like your reaction to how well understood this problem is. I've seen numbers that are all over the place. The Linux Foundation has some numbers indicating that, with the executive order, maybe things aren't so bad from an awareness point of view. However, Red Hat has run its Global Tech Outlook survey for a few years, and when we asked about funding priorities for security, third-party supply chain was basically the bottom of the barrel at 10 percent.

What's your reaction? What are we seeing here?

Brian:  Security is so hard to price. You ask somebody, "Do you want to use secure software?" Nobody says no. But what objective metric do we have to know what's secure enough? Other than, "Have you been hacked recently?" "Are you aware that you've been hacked recently?"

If your answer is "I've not been hacked," it's probably that you're not aware you've been hacked. We really are lousy at coming up with objective ways to say we've hit a mark when it comes to the security of the software, or the risk around a breach, or that kind of thing. Whereas we do know for sure when we lack a feature that could be generating revenue for us.

In product roadmaps, whether we're talking about commercial software or even open source software, feature work tends to win out over paying off technical debt, which tends to win out over updating dependencies, which tends to...

It's a shame: even though people say they prioritize this, it's hard to actually do. One of the things I've been thinking about as I've dived into this is how we get security to be not a checklist, not a burdensome, bureaucratic kind of thing that developers feel they have to follow, but instead a set of carrots that would incentivize devs to add that extra information, to update their dependencies more often, and to make it easier for their own end users to update.

There are some software packages that make updates smooth in that respect: legacy APIs that don't change things a lot. There are others where every minor point release ends up being a rather disruptive update.

One of the things that might be imagined is if cloud providers...Well, first off, through the work that communities are doing on SLSA and Sigstore and some of the other specifications work,

we'll get to the point where you'll be able to start to generate a relative, perhaps even in some cases absolute, metric of the integrity, the trustworthiness, and the risk profile of one tarball versus another, or one collection of deployed software versus another, one system image versus another.

I think cloud providers might be in a position to start to say, "Hey, we will charge less money to run this image if it has a lower risk associated with it, if there's better attestation around the software, if it's less a collection of one offs done by people in their spare time and more something that's been vetted, something that's been reviewed.

"Something that is pretty well established with minor improvements or something versus this other thing." If we can get incentives by the cloud host to charge less for that, a far off future might even be insurance companies. How do we manage risk out there in the real world?

It tends to be by buying insurance to cover our costs if our car dies on us or we have a health scare or something like that. The pricing of premiums in insurance markets is one way to influence certain behaviors. It's one reason people stopped smoking because their health insurance premiums went up if they kept smoking.

Is there a way to make [laughs] tolerating vulnerabilities in software like smoking where you can do it, but it's going to get expensive for you? Instead, if you just updated that underlying version, your cost would come down. Maybe this is a path to getting that to matter more in people's roadmaps.

Gordon:  I guess another way to ask the same thing: is this a matter of people needing to do this, but it's going to be expensive, and they're going to have to spend a lot of money? Or is it that they need to do it, but it doesn't necessarily need to be that onerous?

Brian:  Signing your releases; having a picture of when the dependencies you depend upon are vulnerable and it might be worth updating them. There's a whole batch of activities we can do to make the development tools, and the way stuff gets deployed, embody these specifications and these principles out of the gate, so that the right thing automatically happens.
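The simplest link in that chain is verifying that a fetched artifact matches its published digest. Here is a minimal sketch of that idea; real release signing, with Sigstore or GPG for example, layers an identity and signature on top of this basic integrity check.

```python
# Minimal sketch of artifact integrity checking: compare a downloaded
# artifact's SHA-256 digest against the digest the project published.
# Release signing (Sigstore, GPG, ...) adds "who signed it" on top.

import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, published_digest: str) -> bool:
    return sha256_hex(data) == published_digest

artifact = b"release-1.2.3 tarball contents"
published = sha256_hex(artifact)  # what the project would publish alongside the release

assert verify(artifact, published)
assert not verify(artifact + b" tampered", published)
```

The point of baking this into tooling is that developers never have to run it by hand; the download step does the check automatically.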

There are improvements we could make in the standard software dev tools out there, maybe even in places like GitHub and GitLab, to make the cost of adopting these things really low for developers. Make it the default. Make it the norm, in the same way that accessing a TLS website today is the norm. It's almost unusual to go to one without a TLS certificate.

You'll get warned away now in current versions of Chrome. We've got to do that at the same time as we create incentives to do the things where there's unavoidably a cost. When you update an underlying dependency, it's almost never zero cost. What's a reason to do that? There have to be a series of carrots, as you call it, and hopefully very few sticks.

Finally, though, given the government interest in this domain, you will start to see executive orders like we saw even today, I believe it was, or yesterday. There was a White House executive order telling all the federal agencies they've got to update the firmware on routers and deal with a specific set of outstanding known vulnerabilities, or shut their systems down.

That's ultimately what you have to do when you're running old code that's unsafe. We might also start to see this in regulated industries like finance or insurance. The regulators might start to say, "Hey, if you're running code that hasn't been updated in five years, you are a clear and present danger to everyone else in the ecosystem. Shape up or ship out."

It'll be interesting to see if this starts to be embedded in the systems of the world that way.

Gordon:  Supply chain is your initial focus here. If you're looking a little further out, what are some of the other problems? What might be the next two or three problem areas you'll attack?

Brian:  I've heard tons of stories about this recently. I think it's a pretty well-accepted trope, though I don't have metrics on it: application security and secure software development are not really taught in computer science courses, whether we're talking about the traditional schools or the code academies and code camps, or even other spaces. We don't teach a lot of this stuff.

Of course, we're already working on this inside of OpenSSF, but how do we get it into the standard CS curriculum, all the code academies, and those kinds of things? It's an important thing to do, along with figuring out how to reward and recognize people for accomplishing it.

Gordon:  Anything else you'd like to share with our listeners?

Brian:  The Open Source Security Foundation is still pretty young. There's still lots of different touchpoints, lots of different things that we're either working on or thinking about working on. We now have some resources to go and apply to different domains.

If you are interested in this domain, if you've got a project you think is worthy of bringing to OpenSSF, or if you care about this for your own open source project, please come to openssf.org. Please come engage in the working groups. Working groups are the primary unit of work [laughs] in our community.

We'd love to have you, no matter what level you're at in terms of expertise. We really want to help the entirety of the open source ecosystem uplevel in this space. All are welcome.




Thursday, August 05, 2021

Integration testing and Testcontainers with Richard North

 

Richard North is the creator of the popular open source integration testing library, Testcontainers, and former chief engineer at Deloitte Digital. I caught up with Richard shortly after the company he co-founded and of which he is CTO, AtomicJar, emerged from stealth with a $4M seed funding round led by boldstart ventures.

Although AtomicJar has not yet announced a product, Richard said on the podcast that it will be a SaaS product that extends and complements Testcontainers. Testcontainers is a Java library that supports JUnit tests, providing lightweight, throwaway instances of common databases, Selenium web browsers, or anything else that can run in an OCI/Docker container.

In this podcast, we discuss some of the challenges associated with integration testing and how Testcontainers came into being as an open source project to address some of the key pain points.

Listen to the podcast [MP3 - 10:34]

Tuesday, June 22, 2021

Using open source to help the community and drive engagement at Mux

In this podcast, Mux co-founders Steve Heffernan and Matt McClure talk open source strategy and making contributors feel rewarded. They also delve into why video is so hard and how a community working on it helps to solve the hard problems.

  • Demuxed 2021 conference for video engineers

Listen to the podcast [23:11 - MP3]

Wednesday, June 02, 2021

OpenSLO with Ian Bartholomew

In this podcast, I speak with Ian Bartholomew of Nobl9 about the release of OpenSLO as an open source project under the Apache License 2.0. The company describes OpenSLO as the industry's first standard SLO (service level objective) specification.

In this podcast, we discuss:

  • What site reliability engineering is and how it relates to more traditional sysadmins
  • Trends in observability
  • Why the company decided to make the spec open source
  • How the project thinks about success

Access OpenSLO on GitHub

Listen to the podcast - MP3 [14:25]

Friday, May 14, 2021

A far ranging AI discussion with Irving Wladawsky-Berger

I've known Irving since his days at IBM running Linux strategy. Since he "retired," he's been busy with many things, including a number of roles at MIT, where I've kept in touch with him, including through the MIT Sloan CIO Symposium. We were emailing back and forth last week and discovered that we've been on something of a similar wavelength with respect to AI. Irving just wrote a blog post, "Will AI Ever Be Smarter Than a Baby?", which delved into some of the same topics and concerns that I covered in a presentation at devconf.cz earlier this year.

In this discussion, we explored the question of the nature of intelligence, the answer to which seems to go well beyond what is covered by deep learning (which to put it way too simplistically is in some respects a 1980s technique enabled by modern hardware). 

Among the topics that we explore in this podcast:

  • The two notions of intelligence: classifying/recognizing/predicting data versus explaining/understanding/modeling the world, which is complementary but potentially much more powerful.
  • Whether we need to bring in a stronger element of human cognition (or the learning and problem solving we see in the animal kingdom) to take the next steps, and the related work in cognitive science by researchers like Alison Gopnik at Berkeley and Josh Tenenbaum at MIT.
  • Have we been seduced by great but bounded progress? Can we get to Level 5 autonomous driving?
  • What will the next 10 years look like?

Listen to the podcast - MP3 [40:29]

Thursday, April 01, 2021

Metrics with Martin Mao of Chronosphere

Martin Mao is co-founder and CEO of Chronosphere, a company that offers a hosted SaaS monitoring service. 

In this podcast, he discusses:

  • The observability landscape
  • The rise of Prometheus
  • The role of open source
  • What happens when instrumentation is built into everything cloud-native in a standardized way

In an earlier podcast, I spoke with Martin about the challenges of open sourcing an internal company project.

Listen to the podcast [MP3 - 21:19]

Monday, March 22, 2021

Render, PaaS, and open source with Anurag Goel

Anurag Goel, an early Stripe employee, co-founded Render which puts a PaaS layer on top of Kubernetes. In this interview, we talk about: 

  • How things are different for PaaS than in its v1 days
  • How he thinks about what being "opinionated" means in the context of PaaSs
  • The importance of user experience
  • The benefits for everyone of open sourcing software components

Listen to the MP3 [25:56]

Sunday, October 25, 2020

80 authors, 80 science fiction stories shorter than novels

This is an update to 50 authors, 50 science fiction stories shorter than novels. It's mostly an addition of 30 more authors (many from outside the "big SF names") but I also made some other tweaks.

 I've been playing around with re-reading favorite science fiction short stories while also seeing if there's anything that has managed to elude me over the years. I've also tried to catch up with more recent authors given that I don't read as many books as I used to. (The market for science fiction short stories, in general, is in something of a decline but there are still some fabulous recent stories out there.)

A few rules I set for myself:
  • One story per author. Certainly there are many on this list for whom I could effortlessly reel off multiple deserving entries. A side effect of this rule is that it brought a fair bit of organic diversity to the list which would otherwise have been more heavily populated by a handful of favorite authors.
  • Nothing is novel length but you'll find plenty of novelettes and novellas alongside the short stories. That said, I've tried to favor shorter works and stories that stand well on their own.
  • I didn't include authors whose novels I enjoy but who don't have any short stories I'm aware of that grabbed me, though I've snuck in a couple in recognition of overall achievement where there was at least a reasonable hook.
  • Everything here can be plausibly categorized as science fiction, although some stories hew closer to traditional SF tropes than others, which are at least adjacent to horror, fantasy, and philosophical fiction.
  • Be interesting! Provoke conversation!
I used various lists and recommendations to tickle my memory, ferret out works I might not be familiar with, and point me to highlights in collections. But I've read everything here and don't think I've included anything just because it's "expected" even if I was coaxed into one or two. They're (almost) all stories I like even if I did aim for overall variety of time period, style, and subject matter. In particular, I wanted to bring in some fresher or lesser known voices rather than convincing myself that all older award-winning stories stand up well today. 

Many of these stories can be found online and others in popular "best of" collections. Others are probably harder to come by but I don't think there's anything here that is really hard to find. The Internet Speculative Fiction Database is a good starting point if you're trying to identify collections where a story appears.

Anyway, my list (sorted chronologically) is below. While it's one story per author I do include pointers to other works in the comments. I've been careful to err on the side of avoiding spoilers but if you want to go into the stories completely cold, by all means, just work from the list of titles and authors.

"The Mortal Immortal" Mary Shelley (1833)

You may have heard of Mary Wollstonecraft Shelley. Something about a monster story conceived during a "wet, uncongenial summer" in 1816. Frankenstein; or, The Modern Prometheus, would be published in 1818 and has become a cultural touchstone. Shelley also published many other works in a variety of styles, including this story which concerns the challenges of being an immortal.

"Flatland" Edwin A. Abbott (1884)

This satirical "Romance of Many Dimensions" novella by the English schoolmaster Edwin Abbott Abbott was originally intended to use the fictional two-dimensional world of Flatland to comment on the hierarchy of Victorian culture. However, it became more broadly known when Einstein's General Theory of Relativity introduced the concept of multiple dimensions into popular discourse. 

"A Dream of Armageddon" H. G. Wells (1901)

This story comes from the prolific Victorian writer Herbert George Wells. Wells wrote in a variety of genres including social commentary and history. However, he's most remembered for his science fiction novels like The War of the Worlds (which was famously made into a radio play as well as films of varying quality). This story uses the fictional device of a man recounting dreams of a future time in which he is a political figure who has withdrawn from public life to live with a younger woman on the island of Capri, even as this withdrawal enables his opponents to start a great war with futuristic aircraft. Wells would later build on the concept for his novel The Shape of Things to Come, which would itself serve as the loose basis for the 1936 film Things to Come.

"With the Night Mail" Rudyard Kipling (1905)

Rudyard Kipling would win the Nobel Prize in Literature a couple years after this story was published. Unlike Wells, Kipling is not remembered for science fiction; rather he's mostly associated with stories and poetry often set in colonial India, where he was born and lived much of his early life. "With the Night Mail" is one of only two science fiction stories that Kipling wrote. Both concern the workings and activities of The Aerial Board of Control, a 21st century fictional organization that manages the dirigible and other air traffic for the whole world and acts as a de facto world government. 

"The Machine Stops" E. M. Forster (1909)

Rounding out this trio of Victorian English authors, Forster's reputation also comes in part from a novel, A Passage to India, that concerns the relationship between East and West as seen through the lens of India in the latter days of the British Raj--as well as other novels that primarily focused on class differences in England. Forster wrote that "'The Machine Stops' is a reaction to one of the earlier heavens of H. G. Wells." [The Eloi of The Time Machine.] In this novella, humanity lives underground and relies on a giant machine to provide its needs. There are technologies similar to video conferencing and the Internet. But the machine begins to fail and no one remembers how to fix it. This story is widely considered to be one of the most prescient short works of early science fiction.

"At the Mountains of Madness" H.P. Lovecraft (1936)

H.P. Lovecraft's primary reputation is as a horror writer. However, his stories that were part of the Cthulhu Mythos (such as this one) borrow premises often associated with science fiction such as alien invasion, other dimensions, and interference with human cultural and physiological evolution. For example, this novella popularized ancient astronaut theories as it details the events of a disastrous expedition to the Antarctic continent and what was found there by a group of explorers led by the narrator, Dr. William Dyer of Miskatonic University. 

"Microcosmic God" Theodore Sturgeon (1941)

Originally published in the magazine Astounding Science Fiction, this story is an early example of the use of the "pocket universe" concept in science fiction. A biochemist develops a synthetic life form which lives (and invents) far more quickly than humans can. This biochemist, Kidder, is their "god" who exerts his power over them and profits from them. Sturgeon is not generally very well known but this particular story regularly and deservedly appears in anthologies and best-of lists.

"The Library of Babel" Jorge Luis Borges (1941)

This short story, originally written in Spanish by Argentine author and librarian Jorge Luis Borges, is more of a philosophical exploration than a science fiction story in the usual sense. It asks you to imagine a library whose books contain every possible combination of letters and numbers. Every great work is there, the account of your death is there, and every possible scientific discovery that can be written down exists in the library. But any true information would be buried in, and rendered indistinguishable from, all possible forms of false information and complete gibberish. 

"The Weapon Shop" A. E. van Vogt (1942)

This story was developed from a much shorter 1941 story, "The Seesaw." It was, in turn, used as the basis for a portion of the 1951 fix-up novel The Weapon Shops of Isher; another closely related novel published during the same period was The Weapon Makers. These books revolve around the immortal Robert Hedrock who once created the Weapon Shops as a force to counteract the imperial world government long dominant on earth. These two novels bring together van Vogt's style of hard science fiction and transcendent superheroes more than his other books. However, this short story may do an even better job on its own.

"Mimsy Were the Borogoves" Lewis Padgett (1943)

Padgett was the pseudonym of spouses Henry Kuttner and C. L. Moore when they were writing together, often to humorous effect. The story's title was inspired by a verse from "Jabberwocky," a poem found in Lewis Carroll's classic novel Through the Looking-Glass. It imagines mysterious educative toys timeslipped in from the future, which lead the children who discover them to start thinking in new patterns and communicating in strange ways. "The Twonky" and "The Time Locker" are other Padgett stories for which they're often remembered.

"Arena" Fredric Brown (1944)

Brown, who also wrote detective fiction, was noted for stories written with what The Encyclopedia of Science Fiction calls "professional economies of effect." This short story is one of those. It tells of the settling of an interstellar war through single combat between a human and an alien. Either the story's name or its plot summary may seem familiar to Star Trek fans; it was adapted into a popular episode during the first season of the classic series. Brown experimented with narrative styles; "The Sentry" is a micro-short of his that may seem more predictable today than when it was written. 

"A Logic Named Joe" Murray Leinster (1946)

This story, originally published under Leinster's real name (William F. Jenkins), contains an early description of a computer (called a "logic") in fiction. In it, he imagined the Internet in many respects: he envisioned logics in every home, linked through a distributed system of servers (called "tanks"), to provide communications, entertainment, data access, and commerce. One character says that "logics are civilization."

"The Lottery" Shirley Jackson (1948)

Jackson was known for writing mystery and horror, and this story strays as far as any on this list from what's usually considered science fiction. (It was published in The New Yorker, and few of her short works appeared in traditional science fiction magazines.) "Lottery in June, corn be heavy soon." To say much more would probably give too much of this powerful short story away. To quote Martin Cahill: "If you asked anyone about an American short story that stuck with them for their entire lives, it would not shock me if they were to think for a moment, and then say, 'that one story, "The Lottery,"' followed up with some form of, 'that shit is fucked up.'" Many of her best stories appear in the collection The Lottery and Other Stories.

"The Little Black Bag" C. M. Kornbluth (1950)

One might say that this story shares a premise with "Mimsy Were the Borogoves" in that an object, in this case a doctor's little black bag, gets sent back in time. However, suffice it to say that this is not an equally playful story. Kornbluth's classic works are this story and its sequel, "The Marching Morons." However, he was a prolific author, and his other works include "Two Dooms," which is considered to be one of the better what-if-the-Nazis-won-WWII treatments.

"There Will Come Soft Rains" Ray Bradbury (1950)

Bradbury's work has a poetic and evocative style that can tend towards fantasy and horror as much as science fiction. Charlie Jane Anders, who appears later on this list, had this to say about this particular story: "But this one, in which basically a smart house keeps going after its human inhabitants are all dead, is in a special league in the knife-twisting sweepstakes. Like a lot of stories in the years following World War II, it's concerned with the threat of nuclear annihilation, but also with how our technology might outlive us." This story appears in The Martian Chronicles, which is Bradbury's greatest work and you should just read the whole thing even if you're not a particular Bradbury enthusiast. But if you're looking for a single more Martian-themed story in that collection of linked stories, give "The Third Expedition" a try.

"Pillar to Post" John Wyndham (1951)

A number of Wyndham's best-known novels have been described as cozy catastrophe fiction, a trait shared with a number of British authors of the same general period, of which Wyndham was probably the best practitioner. In Wyndham's case, it's books like The Day of the Triffids, The Chrysalids (Re-Birth), and The Midwich Cuckoos--which have never had the film treatment they deserve. However, in The Seeds of Time in particular, Wyndham also collected short stories that often dealt with time travel and other forms of displacement. The best may be this story, which Wyndham described as "written to suit, I hoped, the policy of a newly arisen American magazine." A more whimsical tale in the same book is "Pawley's Peepholes," written the same year.

"Surface Tension" James Blish (1952)

During the 1950s, Blish wrote his Okie stories, which envision a time when earthbound cities take to migrant wandering among the stars. Epic in scope, these stories were eventually wrangled into a single volume, Cities in Flight. During the same period, he was writing short stories, of which "Surface Tension" is my favorite. The story concerns the events that follow after a spaceship crew, crash-landed on a planet with only puddles of water, genetically engineers its descendants into something that can survive there. The miniaturized descendants must overcome surface tension if they're to get beyond their single puddle. Other well-regarded Blish short stories include "Common Time."

"It's a Good Life" Jerome Bixby (1953)

This story, more horror than science fiction, was the basis of a particularly good episode of the original Twilight Zone; it also has led to various remakes and riffs that played off of the idea including an episode of The Simpsons. At the center of the story is Anthony Fremont, a three-year-old boy with near-godlike powers; he can transform other people or objects into anything he wishes, think new things into being, teleport himself and others where he wants, read the minds of people and animals, and even revive the dead. As Anthony can read minds, the town's population must not only act content with the situation when near him, but also think they are happy at all times. 

"Fondly Fahrenheit" Alfred Bester (1954)

This is a breathless story of a man and his android whose personalities intermesh to become two aspects of a single insane murderous personality. Author Robert Silverberg has described it as a "paragon of story construction and exuberant style" which is what really makes it stand out on this list. Other Bester short stories of particular note are "Adam and No Eve" and "The Men Who Murdered Mohammed."

"The Cold Equations" Tom Godwin (1954)

James Davis Nicoll writes: "Science fiction celebrates all manner of things; one of them is what some people might call 'making hard decisions' and other people call 'needless cruelty driven by contrived and arbitrary worldbuilding chosen to facilitate facile philosophical positions.'" Which pretty much describes this story. I'm making something of an exception by including it, in that it's contrived and I don't much like it. But it's been anthologized so often that it's really part of the canon, so here it is. 

"The Star" Arthur C. Clarke (1955)

Clarke is one of the authors on this list--two others, Asimov and Heinlein, are coming up shortly--who produced such an astonishing volume of quality short stories that picking just one is really impossible. But here we are. I give "The Star" the nod because there's deeper emotion than in much of Clarke's work. It's hard to say too much about this story without giving things away. Let's just say it concerns an expedition that has discovered the remnants of an advanced civilization destroyed when its star went supernova. Other favorite Clarke shorts include "The Nine Billion Names of God," "The Wall of Darkness," "Rescue Party," and "Meeting with Medusa." I might also throw in "The Sentinel" but mostly because it was the inspiration for the film 2001: A Space Odyssey.

"Fiat Homo" Walter M. Miller Jr. (1955)

This one is a bit of a cheat. In the early 1950s, Miller published a series of short stories which would shortly be combined and reworked into A Canticle for Leibowitz, his only novel and the work for which he is known. A Canticle for Leibowitz is arguably the best post-apocalyptic work of science fiction and differs significantly from the more common "Mad Max" model. The novel is set in a sort of Dark Ages hundreds of years after a holocaust and tells its story from the point of view of monks preserving knowledge as in the previous Dark Ages. The reworked "Fiat Homo" makes up the first part of Canticle and is arguably the strongest but, in practice, it's the full novel that you'll read. 

"Allamagoosa" Eric Frank Russell (1955)

This anti-bureaucratic satire meets shaggy-dog story, which won the Hugo Award, was one of a number of quirky short stories Russell wrote that are good examples of science fiction humor of the period. He also wrote a variety of other satires and anti-war stories in the immediate aftermath of WWII.

"Time in Advance" William Tenn (1956)

While not especially well-known (he never won a science fiction writing award), Tenn, the pseudonym of Philip Klass, wrote sharp-witted stories, often with a darkly comic side. This story, written fairly near the end of his active writing career, is set in a far future where a law has been passed enabling citizens to serve out sentences for crimes they intend to commit, serving the full term, but with a 50% pre-criminal discount. When two such pre-criminals return to earth intending to carry out the crimes they've earned, things don't go as planned. "The Brooklyn Project" is another story worth your time and was pretty much a toss-up with this one.

"The Last Question" Isaac Asimov (1956)

Asimov himself suggested that this was one of his favorite short stories. "I got the idea all at once and didn't have to fiddle with it; and I wrote it in white-heat and scarcely had to change a word. This sort of thing endears any story to any writer," he once wrote. It really is almost a perfect idea-based science fiction short story. It deals with the development of a series of computers called Multivac and their relationships with humanity through the course of seven historic settings, beginning on the day in 2061 when Earth becomes a planetary civilization. In each of the first six scenes a different character presents the computer with the same question; namely, how the threat to human existence posed by the heat death of the universe can be averted. It's tight, it's funny, and it's got a killer ending. His novella "Nightfall" is also wonderful (the expanded novel version less so, as is often the case) but I think this short story gets the nod. One or more of his Robot stories could make their way onto this list as well, but I think of them as more of a group achievement award than individual standouts.

"All You Zombies" Robert Heinlein (1959)

This story involves time travel paradoxes and further explores themes introduced in Heinlein's earlier "By His Bootstraps." In his analysis of the story, Davi Ramos writes that "Individualism and the free expression of love and sexuality are among its main themes. The work analyzed here pushes the boundaries of causality to mind-bending extremes, in order to discuss the boundaries of sexuality and the logic of identity." The narrative flow is complex and you probably won't totally get it on your own the first time around. The 2014 film Predestination is based on this story. Some other short stories I'd put on the list are "The Unpleasant Profession of Jonathan Hoag" and many of the stories collected in The Past Through Tomorrow future history including "Requiem" and the prequel to his Lazarus Long immortality saga Time Enough for Love, "Methuselah's Children."

"Flowers for Algernon" Daniel Keyes (1959)

Algernon is a laboratory mouse who has undergone surgery to increase his intelligence. The story is told by a series of progress reports written by Charlie Gordon, the first human subject for the surgery, and it touches on ethical and moral themes such as the treatment of the mentally disabled. The short story was later expanded into a novel and made into the film Charly (for which Cliff Robertson won the Oscar for Best Actor) and many other adaptations but I think the original story is still the most effective. 

"Chronopolis" J. G. Ballard (1960)

Ballard probably became best known (whether people knew his name or not) for his war novel, Empire of the Sun, a semi-autobiographical account of a young British boy's experiences in Shanghai during the Japanese occupation, which Steven Spielberg later made into a film. But he also had a career as a science fiction author that spanned the "world destroying" genre (The Drowned World, The Wind from Nowhere), like other British authors such as John Wyndham, to experimental efforts, some of which were adapted into films such as David Cronenberg's Crash. He also wrote some enticing short stories. One of them is "Chronopolis," which begins with a man in prison, Newman, and proceeds to examine his fascination with the concept of time in a world where clocks have been prohibited, a ban enforced by the time police. Other Ballard shorts that aren't too experimental include "The Drowned Giant," "The Overloaded Man," and, perhaps the most conventionally SF, "Thirteen for Centaurus."


"The Lady Who Sailed the Soul" Cordwainer Smith (1960)

The nom de plume of Paul Myron Anthony Linebarger, Smith (not to be confused with E. E. "Doc" Smith of Lensman fame) wrote relatively little, in part because he held jobs that required extensive foreign travel. Most of his science fiction was set in the Instrumentality universe, and author Frederik Pohl described his stories as "a wonderful and inimitable blend of a strange, raucous poetry and a detailed technological scene, we begin to read of human beings in worlds so far from our own in space in time that they were no longer quite Earth." This story is one of his more lyrical works. It's adjacent in timeline to "Scanners Live in Vain," another Smith story written ten years earlier, near the beginning of his science fiction writing. While "Scanners" is often cited as Smith's best work, I prefer the more modern and poetic feel of this story.

"Harrison Bergeron" Kurt Vonnegut Jr. (1961)

Although Vonnegut was a mainstream author, whatever that means exactly, many of his novels contained fantastical or science fictional elements whether the Ice Nine of Cat's Cradle, the time displacement of Slaughterhouse Five, or just about anything related to The Sirens of Titan. The premise of this story is that amendments to the Constitution dictate that all Americans are fully equal and not allowed to be smarter, better-looking, or more physically able than anyone else. The Handicapper General's agents enforce the equality laws, forcing citizens to wear "handicaps" such as masks for those who are too beautiful, loud radios inside the ears of intelligent people that disrupt thoughts, and heavy weights for the strong or athletic.

"A Rose for Ecclesiastes" Roger Zelazny (1963)

This is another one of those stories that the reader is probably best discovering on their own. It involves the Biblical book of Ecclesiastes, language, Mars, and prophecy, and I should probably leave it at that. This story was an early career work for Zelazny, who would go on to write many novels, most notably Lord of Light and his Amber series.

"'Repent, Harlequin!' Said the Ticktockman" Harlan Ellison (1965)

I had the pleasure of seeing Ellison speak a couple of times when I was an undergrad and he was simultaneously both entertaining and thought-provoking. If I'm being honest though, I've tended to admire rather than love many of his short stories. That said, "'Repent, Harlequin!' Said the Ticktockman" opens with a passage from Civil Disobedience by Henry David Thoreau and is a satirical look at a dystopian future in which time is strictly regulated and everyone must do everything according to an extremely precise time schedule. And breaking the law has severe consequences. Another perpetual Ellison favorite is "I Have No Mouth, and I Must Scream."

"The Stone Place" Fred Saberhagen (1965)

Saberhagen wrote a variety of science fiction and fantasy but he became best known for his Berserker series of stories and novels. Berserkers are interstellar killing machines, programmed to eliminate all forms of life, that presumably destroyed their creators in a long-ago war. This is another one of those cases where picking out individual stories is at least somewhat challenging but this one is a good place to start that doesn't really require additional context. "What T and I Did" and "Masque of the Red Shift" are additional stories of particular note, also from his initial Berserker collection.

"Light of Other Days" Bob Shaw (1966)

With Shaw we return to a writer who, in spite of a substantial body of work, is mostly known for a single story and, to a lesser degree, its sequel "Burden of Proof" and an associated novel, Other Days, Other Eyes. The melancholy "Light of Other Days" builds on the idea of "slow glass," glass through which light takes years to pass, to explore the preservation of the past into the present.

"We Can Remember It for You Wholesale" Philip K. Dick (1966)

Dick's fiction explored varied philosophical and social themes, and featured recurrent elements such as alternate realities, simulacra, monopolistic corporations, drug abuse, authoritarian governments, and altered states of consciousness. He was prolific, but a number of his works were arguably more successful as screen adaptations than in their original print form. This story about memory implants and the nature of reality stands well by itself but was also the basis for the 1990 film Total Recall, directed by Paul Verhoeven and starring Arnold Schwarzenegger--which is probably best interpreted in light of its original source material. The novels The Man in the High Castle (adapted as an Amazon series) and A Scanner Darkly (made into a generally less successful 2006 rotoscoped movie) are also particularly recommended.

"Neutron Star" Larry Niven (1966)

Niven is best known for his hard-science Known Space series, populated by imaginatively drawn strange aliens and exotic technologies in a far future. This story is the first to feature Beowulf Shaeffer, the ex-pilot and reluctant hero of many stories. It also marked the first appearance of the nearly indestructible General Products starship hull, as well as its creators, the Pierson's Puppeteers, a cowardly but advanced race with an appearance somewhat in common with centaurs. Shaeffer's mission is to explore a neutron star and things go wrong; it turns out the science in the story isn't quite right, but it's still a great example of Niven's work. Niven's shorts tend to be stronger than his solo-authored novels. Many of the stories in the Known Space series are good reads, including those in the collection named after this story, Neutron Star. Outside the Known Space universe, "Inconstant Moon" is arguably his best if just picking stories in isolation.

"The Fifth Sally or The Mischief of King Balerion" Stanislaw Lem (1967)

The widely translated Lem is the author of Solaris, which has been filmed three times, including as a famous Soviet-era art film co-written and directed by Andrei Tarkovsky in 1972. In addition to essays, he wrote often humorous and satirical science fiction; he's sometimes said to have worked in the same vein as Swift and Voltaire. The Cyberiad brings together independent--but largely linked through the characters of the brilliant robotic engineers Trurl and Klapaucius--stories that take place in a pseudo-medieval world. You'll really want to read the whole book but you can start with this story or perhaps the longer "Tale of the Three Storytelling Machines of King Genius."

"Aye, and Gomorrah" Samuel Delany (1967)

This first appeared as the final story in Harlan Ellison's seminal 1967 anthology, Dangerous Visions. It was controversial because of its disturbing sexual subject matter although, as with many of the stories in that anthology, what pushed the envelope in 1967 doesn't necessarily do so today. The narrative involves a world where astronauts, known as Spacers, are neutered before puberty to avoid the effects of space radiation on gametes. Aside from making them sterile, the neutering also prevents puberty from occurring and results in androgynous adults whose birth sex is unclear to others. Spacers are fetishized by a subculture of "frelks" attracted by the Spacers' lack of arousability.

"Kyrie" Poul Anderson (1968)

A love story involving a human woman, a luminous energy being, and a black hole might not sound too appealing at first glance, but this is probably my favorite Anderson short, of which there are many. (He also wrote many novels; the immortality tale The Boat of a Million Years tops my list of those.) Among his shorter works, the novelette "Goat Song" and the novella "The Saturn Game" are also especially worth checking out.

"The Muse" Anthony Burgess (1968)

A literary critic, mostly comic writer, musician, and linguist, Burgess is probably best known for his dystopian satire A Clockwork Orange, which Stanley Kubrick made into a controversial but critically acclaimed film. This time-travel-with-a-twist story is both a shorter and a smaller work but, if you can track down a copy, is well worth the quick read.

"Weihnachtsabend" Keith Roberts (1972)

Generally regarded as Roberts' single best story, this depicts an alternate world in which the Nazis won WWII and expands upon some of the savage German myths implicit in that victory. Roberts is also known for Pavane, a fix-up novel of linked stories imagining a victorious Spanish Armada that results in a technologically backward Catholic England.

"Alien Stones" Gene Wolfe (1972)

Concerning Wolfe, The Encyclopedia of Science Fiction uses phrases such as "he created texts which–almost uniquely–marry Modernism and Genre SF" and "Wolfe remains quite possibly sf's most important author qua author." Which is to say, he can be more than a bit challenging to read. I found this first contact story to be one of his more accessible works for reasons both of content and length. The collection where it appears, The Island of Doctor Death and Other Stories and Other Stories, is a good Wolfe starting point in general. His novella The Fifth Head of Cerberus, which appears in a collection of the same name, also has many fans, but it's intricately linked with the other two novellas in that collection so isn't really a standalone work.

"Eurema's Dam" R. A. Lafferty (1972)

Lafferty was always something of an enigma who didn't fit neatly into science fiction or fantasy categories. A "teller of tall tales" was one valid lens through which to view him, although there were others, including that of conservative Catholicism. That said, he won a Hugo for this story, which editor Robert Silverberg described as a "story about a schlemiel of an extraordinary sort" but which Lafferty himself apparently didn't especially favor. Other notable stories, including "In Our Block," "Narrow Valley," and "Thus We Frustrate Charlemagne," appear in the long out-of-print Nine Hundred Grandmothers; a new Lafferty collection is supposedly appearing in 2021.

"The Ones Who Walk Away from Omelas" Ursula Le Guin (1973)

Le Guin wrote over a course of almost 60 years to a variety of audiences in a variety of genres. This story is essentially philosophical fiction that explores a summer festival in the utopian city of Omelas, whose prosperity depends on the perpetual misery of a single child. Another, longer, Le Guin story, "Buffalo Gals, Won't You Come Out Tonight," recounts a human girl's meeting with incarnations of Native American spirit animals and appears in an eponymous collection. Other Le Guin works such as "Vaster than Empires and More Slow" and the novel The Word for World Is Forest examine the relationship between humans and their natural environment. "The Day Before the Revolution" serves as a short story prologue to The Dispossessed, a novel about an anarchist society that is my favorite of hers.

"He Fell into a Black Hole" Jerry Pournelle (1973)

Jerry Pournelle was a writer of typically military science fiction, an essayist on science topics, an editor, and the creator of essentially a proto-blog in Byte magazine. This story is set in Pournelle's CoDominium universe, where the US and the USSR have formed a de facto planetary government and then an interstellar empire. Pournelle also co-authored a particularly noteworthy hard science novel with collaborator Larry Niven--The Mote in God's Eye. The shorts "Motelight" and "Reflex" were carved out of that book prior to publication.

"Where Late the Sweet Birds Sang" Kate Wilhelm (1974)

This story is the first of three linked stories later published as a novel of the same name in 1976. It takes place as civilization is collapsing; an isolated group prepares for this largely infertile post-collapse world by undertaking a cloning program. Wilhelm wrote novels and stories in the science fiction, mystery, and suspense genres over the course of a long career. Another story especially worth checking out is "The Girl Who Fell into the Sky," which is more supernatural than SF but did win a Nebula for best novelette.

"Catch That Zeppelin!" Fritz Leiber (1975)

This is a clever alternate history take, winner of both the Hugo and Nebula awards, that imagines a more decisive defeat of Germany at the end of the First World War. Although more fantasy than science fiction, Leiber's novelette "Gonna Roll the Bones" in Ellison's Dangerous Visions anthology also won both awards.

"Houston, Houston, Do You Read?" James Tiptree Jr. (1976)

This novella is another one of those stories where just about any description would shortchange readers wanting to experience it without knowing what it's really about and where it's headed. Tiptree was the pen name used by Alice Bradley Sheldon from 1967; her real identity wasn't widely known but was generally assumed to be that of a man until it leaked out shortly after this story was written. (Although there was speculation, based partially on the themes in her stories, including this one, that Tiptree might be female.) In any case, this is an interesting story that won both Hugo and Nebula prizes, even if the gender and social stereotypes seem dated even for the time it was written.

"Ender's Game" Orson Scott Card (1977)

This story gave birth to a series of novels, starting with a novelization of "Ender's Game." While the first sequel, Speaker for the Dead, was quite excellent in its own right, the initial short story shines brightest in my opinion. It begins with Ender being made the commander of Dragon Army at Battle School, an institution designed to turn young children into military commanders against an unspecified enemy. Armies are groups of students that fight mock battles in the null-gravity Battle Room. Due to Ender's genius in leadership, Dragon Army dominates the competition. Go ahead and read the story. (But don't bother with the film.)

"The Guy with the Eyes" Spider Robinson (1977)

In general, I've tried to pick standalone stories for this list. This one works fine in that vein, but it's really here as the first entry in the linked stories that make up the Callahan's Crosstime Saloon series; they've been collected into a number of books beginning with Callahan's Crosstime Saloon. The stories are filled with strange or unusual events and visitors and are an obvious homage to Fletcher Pratt and L. Sprague de Camp's Tales from Gavagan's Bar and Arthur C. Clarke's Tales from the White Hart.

"Sandkings" George R. R. Martin (1979)

Martin is, of course, best known for his series of epic fantasy novels A Song of Ice and Fire, which were adapted into the HBO series Game of Thrones. However, before this breakout, he had written a variety of stories that mixed fantasy, horror, and science fiction of which "Sandkings" is a clear example. It certainly includes sensibilities that will be familiar to Game of Thrones viewers. Suffice it to say that Martin was supposedly inspired by a college friend at Northwestern University who had a piranha tank and would sometimes throw goldfish into it between horror film screenings. (This explains a great deal about Martin.)

"Riverworld" Philip Jose Farmer (1979) 

This shorter piece can serve as a good introduction to Farmer's multi-volume Riverworld series, which is set on a planet where a godlike race has resurrected the whole of humanity along the banks of a river that sinuously circles around and around a world. Another well-known Farmer work is "The Lovers," which, while originally a novella, is usually read in its expanded novel form. His novella "Riders of the Purple Wage" imagines a society with basic income and total sexual freedom.

"Grotto of the Dancing Deer" Clifford D. Simak (1980)

This was one of Simak's last short stories and won Hugo, Nebula, and Locus awards; it shows off his skills at their strongest. It's an archaeo-sci-fi tale that's brilliantly written and moving, and it slowly pulls you into something of a mystery.

"The Gernsback Continuum" William Gibson (1981)

This is probably an unconventional choice. "Burning Chrome" or "Johnny Mnemonic," associated with his Sprawl trilogy, would probably be more conventional picks for Gibson short stories. But I think this story is actually more central to cyberpunk. As Bruce Sterling writes, "'The Gernsback Continuum' shows [Gibson] consciously drawing a bead on the shambling figure of the SF tradition. It's a devastating refutation of 'scientifiction' in its guise as narrow technolatry." It takes down the glimmering science fictional cities in which zeppelins rule the airways.

"True Names" Vernor Vinge (1981)

This novella doesn't have the gritty ambiance of later cyberpunk, but it's still a seminal work of the genre. In 2001, The New York Times declared that Vinge's depiction of "a world rife with pseudonymous characters and other elements of online life that now seem almost ho-hum" had been "prophetic." I'd add that we don't use the term "true name"--that is, the tying of someone's online identity to their identity in the real world--as much as we should; it's a great concept. He subsequently wrote novels that explored a future libertarian society and the impact of a technology that can create impenetrable force fields called 'bobbles,' but he is probably best known for his 1992 novel A Fire Upon the Deep and, to a lesser degree, its sequels. A newer story that deals with a similar topic to "True Names" is "Fast Times at Fairmont High," which can be viewed as a sort of pilot for his novel Rainbows End.

"The River Styx Runs Upstream" Dan Simmons (1982)

Although his Hyperion Cantos series of (primarily) novels are science fiction, much of Simmons' output is horror and this story certainly bridges the genres. This was the story Simmons published after attracting the attention of Harlan Ellison at a writing workshop. It imagines a world in which "Resurrectionists" can technologically revive family members but, while the resurrected can function somewhat autonomously, they no longer have higher brain functions. Many of Simmons' short stories are in the collection Prayers to Broken Stones, which also includes "Remembering Siri," originally a part of the aforementioned Hyperion.

"Blood Music" Greg Bear (1983)

Originally a novelette, later expanded into a (less good) novel, this story combines "man's creation gone wrong" with an evolutionary transcendence theme. Whether or not this story kicked off nanotechnology as a subject for science fiction as is sometimes claimed, it was certainly an early and prominent example.

"Press Enter ▮" John Varley (1984)

Written a few years after Vinge's "True Names," this is a very different story but concerns some of the same themes related to developing connected computer networks. It has been the more reprinted of the two, perhaps because of its harder edge. "The Persistence of Vision" is another Varley story that won most of the major science fiction awards.

"Bloodchild" Octavia Butler (1984)

Set on an alien planet, this story depicts the complex relationship between human refugees and the non-humanoid aliens who keep them in a preserve to protect them. Themes of co-existence and breeding appear throughout much of Butler's writing, including the Xenogenesis trilogy that came out during the five years following this story.

"The Crystal Spheres" David Brin (1984)

The Fermi paradox, named after physicist Enrico Fermi, refers to the apparent contradiction between the lack of evidence for extraterrestrial civilizations and the many estimates that suggest such civilizations should be commonplace in a very large and old galaxy and universe. This story postulates a solution, the starting point of which is that every habitable solar system is surrounded by a "crystal sphere" which can only be broken from the inside--once a civilization has the technology to do so. Another good Brin short story is "The Giving Plague" but I figured there were already enough stories with a plague component on this list given the times.

"The Road Not Taken" Harry Turtledove (1985)

Turtledove has been a prolific alternate history author, although it's mostly hard to pick out standout individual stories. However, this one is as good a choice as any, even if the Robert Frost poem that inspired the title is, as is usually the case, misinterpreted here. It's the prequel to "Herbig-Haro," published the year before. The Guns of the South is a notable alternate history Civil War novel by Turtledove that I recommend in part because it's standalone, while many of his series run through many books.

"Schrödinger's Kitten" George Alec Effinger (1988)

Partially sharing the same setting as a number of his novels, this novelette follows a Middle-Eastern woman, Jehan Fatima Ashûfi, through various realities. She perceives these realities as "visions" and assumes they might come to her from Allah. The realities correspond to a form of the many worlds interpretation of quantum mechanics, and the story's title comes from the Schrödinger's cat thought experiment.

"Many Mansions" Alexander Jablokov (1988) 

This funny, inventive tale asks: if religion is the opiate of the masses, as Karl Marx had it, why wouldn't someone smuggle it? I went to grad school with Alex and we were briefly housemates. Another particularly recommended short story is "At the Cross-Time Jaunter's Ball"; his other stories are mostly collected in The Breath of Suspension.

"Axiomatic" Greg Egan (1990)

In this story, which appears in a collection of the same name, the protagonist enters a store selling mods not only for every variety of psychedelic experience but for altering one's personality traits, sexual orientation, and even religion. The protagonist seeks a custom-made mod that will suspend his moral convictions long enough for him to murder his wife's killer. All the stories in the collection delve into different aspects of self and identity.

"Even the Queen" Connie Willis (1992)

This short story falls squarely on the comic side of Willis' production. The Village Voice called it "a comedy of identity politics and mother-daughter relations" and it's as good an introduction as any to Willis' light-hearted and sometimes madcap writing. She also has a more somber, even tragic, side with stories such as "The Last of the Winnebagos" and "A Letter from the Clearys." "Fire Watch" is a short story introduction to her works that feature time travel by history students and faculty of a future University of Oxford; noteworthy novels in that series are Doomsday Book and To Say Nothing of the Dog.

"Story of Your Life" Ted Chiang (1998)

Although the decline of science fiction magazines has arguably helped lead to a decline of shorter science fiction overall, the well-awarded Chiang has never written a full-length novel. Moviegoers may recognize this novella as the basis for the 2016 film Arrival. (The film keeps the basic structure of the story but adds what are probably best described as Hollywood elements.) The story is narrated by linguist Dr. Louise Banks, who is called in to communicate with alien visitors. It brings in questions of free will and the nature of time but it's another story that rewards discovering its structure on your own. Chiang has many fine works but particularly noteworthy are the novella "The Lifecycle of Software Objects" and the short story "Tower of Babel."

"Taklamakan" Bruce Sterling (1998)

Sterling is one of the founders of the cyberpunk movement in science fiction. Simone Caroti calls this story "an intelligent reworking of the generation starship concept for the information age" (which it combines with an espionage angle). This story is one of Sterling's loosely linked Chattanooga stories of which "Bicycle Repairman" is also particularly recommended. Among his Shaper/Mechanist stories, "Cicada Queen" is a good place to start.

"The Janitor on Mars" Martin Amis (1998)

Amis is a British writer who has written very little science fiction; indeed this story, which was originally published in The New Yorker, is the only science fiction in the Heavy Water and Other Stories collection where it appears. A robot makes contact from Mars and reveals the truth of mankind's place in the Universe. It's funny and melancholy at the same time.

"A Colder War" Charlie Stross (2000)

This alternate history novelette fuses real and inspired-by characters and events of the Cold War with H. P. Lovecraft's Cthulhu Mythos. It's a thoroughly imaginative mash-up, though your reading will probably benefit from some familiarity with Lovecraft's work. Stross also examines the Cold War in his 2006 novella "Missile Gap." "Rogue Farm" imagines a world of biological fabricators, eight-legged cows, talking dogs, microscopic surveillance bots, mid-life genetic upgrades, and something akin to Larry Niven's stage trees.

"The Fluted Girl" Paolo Bacigalupi (2003)

Bacigalupi often explores the effects of bioengineering, water shortages, and fossil fuels no longer being viable. His short stories are collected in Pump Six and Other Stories; they are collectively memorable but bleak. This story starts with the title character "huddled in the darkness clutching Steven’s final gift in her small pale hands. Madam Balarie would be looking for her. The servants would be sniffing through the castle like feral dogs." Other stories in the volume, probably best read with appropriate spacing like episodes of "Black Mirror," include the novelette "The People of Sand and Slag" and the novella "Pump Six."

"Beyond the Aquila Rift" Alastair Reynolds (2005)

Two short stories by Reynolds were the basis of a couple of the better episodes in the (recommended) Netflix anthology series "Love, Death & Robots." One of them was this one about a space jump gone wrong. The other was "Zima Blue." Other stories in the eponymous collection are worth checking out in addition to the longer novella "Diamond Dogs" set in Reynolds' Revelation Space universe.

"The Egg" Andy Weir (2009)

This is a roughly thousand-word story that Weir describes as something he pounded out in about 40 minutes. It's a fun little story, so I don't think I'm including it unreasonably, but its viral popularity also helped lead to Weir's success with his hard science fiction novel The Martian, which was made into a popular Hollywood movie.

"The Fermi Paradox is our Business Model" Charlie Jane Anders (2010)

This story is a fun take on why the previously mentioned Fermi Paradox might be the result of deliberate outside actions. Aliens may be involved. And they may not be benevolent. Anders has also won a Hugo for her novelette "Six Months, Three Days."

"Sinners, Saints, Dragons, and Haints in the City Beneath the Still Waters" N. K. Jemisin (2010)

Much of Jemisin's writing concerns cultural conflict and oppression. This story is set in the Ninth Ward of New Orleans in the aftermath of Hurricane Katrina. Heather Rose Jones writes that "What makes this story great is the immersive voice and language, the way descriptions of everyday surroundings slide easily into the fantastic, and the way the folklore of cities and peoples is woven into a new mythos." Many of Jemisin's shorts are collected in How Long 'til Black Future Month?

"Mono No Aware" Ken Liu (2012)

This touching Hugo-winning story is about a ship fleeing the destruction of Earth, as told through Hiroto Shimizu's attempt to teach someone else about Japanese culture even if he himself doesn't know much about it anymore. Liu's "The Paper Menagerie" is even more awarded, but I didn't care for it as much as this one. I also particularly like "Perfect Match."

"Curse 5.0" Liu Cixin (2013)

Liu Cixin is best known for his novel, The Three-Body Problem, and its sequels. This short story is distinguished by its dark humor concerning the life of a computer virus. The same collection also contains longer stories such as "Mountain," about alien first contact, and "The Wandering Earth," which became the basis for the first big-budget Chinese science fiction blockbuster. At least some of Liu's work is reminiscent of earlier science fiction authors like Arthur C. Clarke where the focus is more on exposition than characters.

"Inventory" Carmen Maria Machado (2013)

This is yet another story about which I hesitate to say much of anything. It's more literary than genre and to even say why it's connected to science fiction would be to cheat the reader.

"Ambiguity Machines: An Examination" Vandana Singh (2015)

In this story, we learn of machines that blur boundaries and confound the laws of physics. It is presented as three loosely linked tales of impossible machines in remote areas of the earth, whose characters hope to use them to be reunited with a person they love or to find their way home. Vandana Singh has a unique style, about which Matthew Cheney writes: "history, myth, science, and storytelling all infuse each other while the personal and cosmic sit side by side." This story appears in the collection Ambiguity Machines and Other Stories, whose stories play off of each other in various ways although you can read them standalone.

"Thirteen Ways of Destroying a Painting" Amber Sparks (2016) 

One reviewer described the collection in which this story appears as "an off-kilter universe of almost-fairy tales with equal parts beauty and melancholy." This is a time travel story, but one told in a very original and very spare way. Another favorite is "Cemetery for Lost Faces," likewise in the collection The Unfinished World: And Other Stories. Many of Sparks' stories have science fiction or fantastical elements, although her writing is broader than that.

"The Ghost Ship Anastasia" Rich Larson (2017)

I like a Reddit blurb on this: "Drugs, bioships, and batshit AI in a deep space horror/love story." Like Bacigalupi, Larson strongly tends towards the dark side; see also "Painless." But he's a relatively young author whose work appears in numerous Year's Best anthologies and is well worth perusing.

Some of the text in this post is adapted from Wikipedia and The Encyclopedia of Science Fiction by John Clute and Peter Nicholls.

Hacker News thread on earlier 50 author version: https://news.ycombinator.com/item?id=23959648