Henry Blodget is a smart journalist who knows how to drive pageviews. He certainly got me to click through when he picked the headline, “DEAR ENTREPRENEURS: Here’s How Bad Your Odds Of Success Are.”
Blodget riffs on a tweet by Paul Graham to estimate the odds of startup success:
“Graham says that 37 of the 511 companies that have gone through the Y Combinator program over the past 5 years have either sold for, or are now worth, more than $40 million.
Most entrepreneurs would probably view creating a company worth more than $40 million as a success (unless the company raised more capital than that). And, on its face, the “37 companies” number seems relatively impressive.
In fact, however, the number tells a scary and depressing story.
This number suggests that a startling 93% of the companies that get accepted by Y Combinator eventually fail.
If only 37 of the companies that have applied to Y Combinator over the years have succeeded, this is a staggeringly low 0.4% success rate.
Put differently, only one in every 200 companies that applies to Y Combinator will succeed.”

I don’t contest Blodget’s numbers. Y Combinator is a great program, and I agree that it ought to give its companies an above-average chance of success.

What I do contest is the definition of success. One of the YC companies I was an investor in was AppJet, which produced EtherPad. AppJet was sold to Google for less than $40 million. Yet all the investors in the deal made money, and the founders became millionaires.
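Blodget's percentages check out, for what it's worth. A quick sanity check on the arithmetic (the applicant-pool size is inferred from the quoted 0.4%, not reported anywhere):

```python
# Sanity-check the success-rate arithmetic behind Blodget's argument.
accepted = 511  # YC companies over ~5 years, per Paul Graham's tweet
winners = 37    # companies sold for, or now worth, more than $40M

accepted_rate = winners / accepted
print(f"{accepted_rate:.1%} of accepted companies cleared $40M")  # 7.2%
print(f"{1 - accepted_rate:.1%} did not")                         # 92.8%

# Working backward from the quoted 0.4% (1-in-200) applicant success
# rate gives the implied size of the applicant pool:
implied_applicants = winners / 0.004
print(f"~{implied_applicants:,.0f} applicants implied")           # ~9,250
```

Note that the whole argument turns on where you set the $40 million bar, which is exactly the point contested below.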
Moreover, the founder of AppJet, Aaron Iba, went on to become…a partner at Y Combinator. It sure doesn’t seem like Paul Graham was disappointed with him.
If you raise $10 million, any exit under $40 million is a disappointment. But if you’re capital-efficient and build a real business, even a smaller exit can be a great outcome for everyone involved.
I’d love to know how many YC companies made their investors money – that’s a far better measure of success.
(Image credit: Bigstock)
(Cross-posted @ Adventures in Capitalism)
I have a triple dose of videos for you today! The AWS team is growing at a rapid pace and we're looking for great people to fill many different positions. In order to give you a better sense for the jobs and the kinds of people that you'd be working with, I spent some time interviewing some of my colleagues. I'll be publishing the videos over the course of the next couple of weeks.
I interviewed members of our professional services and development teams. I also interviewed the leader of our support organization. The AWS Careers Page contains additional information about each of the job families. We have open positions in North and South America, Europe, Africa, and Asia-Pacific.
If you would like to apply for any of the jobs, please use the email address associated with the job family. I'd also like to ask you to take a moment to fill out the survey.
I spoke with Matt Tavis to learn about his responsibilities as a member of the AWS Professional Services team:
I spoke with Andrew Dickinson to learn about his job as a senior software development engineer (SDE):
Brent Jaye runs the team that provides AWS support to customers around the globe. He is hiring for a multitude of positions:
I hope that you enjoy these videos, and that they give you a glimpse of what it is like to work on the team.
Let's take a quick look at what happened in AWS-land last week:

Monday, June 10
- We announced an Amazon RDS Price Reduction for On-Demand and Reserved Instances.
- The AWS Security Blog published part two of a series on Securing Access to AWS Using MFA.
- We added Red Hat Enterprise Linux (RHEL) to the AWS Free Usage Tier.
- We announced that Amazon CloudFront now supports Custom SSL Domain Names and Root Domain Hosting.
- The AWS .NET Blog talked about Connecting to Amazon EC2 Instances Using the AWS Toolkit for Visual Studio.
- We published an AWS case study for game developer Supercell.
- The AWS Java Blog shows how to Implement Rate-Limited Scans in Amazon DynamoDB.
- The AWS Mobile Blog discussed Timeout and Retry Options in the AWS SDK for iOS.
- AWS Marketplace and Jaspersoft started a promotion: get a $175 Amazon EC2 credit if you start a subscription between June 15 and July 31 and use the product for at least 200 hours.
- This week AWS Marketplace added new products from vendors including Porticor, Univa, CloudCheckr, FedTax, Solano Labs, and Virtual Solutions.
Your #1 Sales Rep Should Be Driving an M6 Convertible By Month 12. (And Not Buying a Panerai Watch.)
I want to spend a few posts and some time on sales comp plans for early-ish stage SaaS companies (up to say $20m in ARR). Because almost all the sales comp plans you are going to read and learn about are great – for SaaS companies that are well post-Scale. They work great for Salesforce, or Box. Or for companies that are investing huge amounts in sales & marketing like Yammer did. But they probably won’t work for you until you are Bigger. You’ll waste a ton of money and not learn enough.

I’m going to propose a framework for you in a subsequent post.

But before we get there, let me suggest one simple way to think about your sales comp plan: your top rep should be driving an M6 Convertible. Just the top rep. And not when you hire him or her (you want to hire hungry reps, especially to start). But by 12 months or so down the road.
What do I mean?
Well, broadly speaking, here’s what you want your first real sales rep comp plan to actually accomplish:

- The comp plan must be nominally competitive with peer companies. If it’s not, you won’t get the good ones. You can’t cheap out. You’ll get candidates, but you’ll end up with dregs if you do.
- Your top 1-2 reps should be able to just kill it. Make a ton of money. And buy an M6 Convertible. Because you want them to prove it works, your sales and business model. To prove it to everyone else, without a doubt. Maybe there will only be one LeBron on your team at first. But you need one. One that is so good at selling your product, he or she not only closes a ton of business – but is so confident that he or she can continue to sell your product that buying an M6 Convertible is just a downpayment on an even greater future as a salesperson at your SaaS start-up. Even great salespeople that don’t believe don’t sign four-figure monthly leases.
- The mid-packers shouldn’t be interested. You don’t want an incentive structure where the losers hang around on the sales team. That will not only waste capital and, more importantly, leads – but you’ll get confusing data. You want a plan where they cycle out.
- The pretenders should cycle out as well. These are the guys that talk the talk, but can’t close the customer, at least not enough, at least not without say the entire Salesforce brand [or insert other Big Leader here] and apparatus behind them. My tell-tale sign here? The Panerai watch. The $10,000 watch. But without the M6 Convertible (or worse, paired with a dated AMG sedan from 1-2 generations ago). Why? The winners know they can continue to win. But even the pretenders eventually have one good quarter. One good bonus. And buy the $10k watch. But not the $100k car. Because they know it was luck, or at least, that they aren’t good enough to sustain it. So these guys always want (x) a big guaranteed base salary plus (y) a draw for X months. Avoid them like the plague.

Can you judge the rep by the watch? I know that’s superficial. I know it sounds lame. I know there are many exceptions that make the rule.

But sales is about money, especially at the individual contributor level. Earning it, chasing it, closing it, living it.

So this seemingly superficial tell? I think it’s true.

Coming up next here: an initial sales comp plan that can help achieve these goals.
(Cross-posted @ saastr)
Big Data remains one of the most talked about technology trends in 2013. But lost among all the excitement about the potential of Big Data are the very real security and privacy challenges that threaten to slow this momentum.
Security and privacy issues are magnified by the three V’s of big data: Velocity, Volume, and Variety. These factors include variables such as large-scale cloud infrastructures, diversity of data sources and formats, the streaming nature of data acquisition, and the increasingly high volume of inter-cloud migrations. Consequently, traditional security mechanisms, which are tailored to securing small-scale static (as opposed to streaming) data, often fall short.
The CSA’s Big Data Working Group followed a three-step process to arrive at the top security and privacy challenges presented by Big Data:
- Interviewed CSA members and surveyed security-practitioner-oriented trade journals to draft an initial list of high-priority security and privacy problems.
- Studied published solutions.
- Characterized a problem as a challenge if the proposed solution does not cover the problem scenarios.
Following this exercise, the Working Group researchers compiled their list of the Top 10 challenges, which are as follows:
- Secure computations in distributed programming frameworks
- Security best practices for non-relational data stores
- Secure data storage and transactions logs
- End-point input validation/filtering
- Real-Time Security Monitoring
- Scalable and composable privacy-preserving data mining and analytics
- Cryptographically enforced data centric security
- Granular access control
- Granular audits
- Data Provenance
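To make one of these concrete, "end-point input validation/filtering" is about rejecting malformed or spoofed records before they enter a large-scale pipeline, where bad data is much harder to find. A minimal illustrative sketch (the record schema and field names here are invented for the example, not taken from the CSA report):

```python
# Illustrative end-point input validation for a streaming ingest pipeline.
# The schema below (device_id/timestamp/reading) is hypothetical.
def validate_record(record: dict) -> bool:
    """Accept only records that match the expected schema and value ranges."""
    required = {"device_id": str, "timestamp": float, "reading": float}
    for field, ftype in required.items():
        if not isinstance(record.get(field), ftype):
            return False
    # Range checks filter obviously spoofed or corrupted readings.
    return 0.0 <= record["reading"] <= 1000.0

stream = [
    {"device_id": "sensor-1", "timestamp": 1371081600.0, "reading": 42.5},
    {"device_id": "sensor-2", "timestamp": 1371081601.0, "reading": -999.0},  # out of range
    {"reading": 10.0},  # missing required fields
]
clean = [r for r in stream if validate_record(r)]
print(len(clean))  # 1
```

The hard part in practice, as the report notes, is doing this kind of filtering at scale and when the end-points themselves cannot be trusted.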
The Expanded Top 10 Big Data challenges has evolved from the initial list of challenges presented at CSA Congress to an expanded version that addresses three new distinct issues:
- Modeling: formalizing a threat model that covers most of the cyber-attack or data-leakage scenarios
- Analysis: finding tractable solutions based on the threat model
- Implementation: implementing the solution in existing infrastructures
The full report explores each one of these challenges in depth, including an overview of the various use cases for each challenge.
The challenges themselves can be organized into four distinct aspects of the Big Data ecosystem.
The objective of highlighting these challenges is to bring renewed focus on fortifying big data infrastructures. The Expanded Top 10 Big Data Security Challenges report can be downloaded in its entirety here.
Last night, at the 6th Annual International Datacenters Awards in London, Senior Solutions Architect +Sam Mitchell brought home the award for Public Cloud Services & Infrastructure.
Winner! International Datacenters Awards - Public Cloud Services & Infrastructure
The Public Cloud Services & Infrastructure Award recognises a company with a demonstrable track record that can be used as an exemplar to the global datacentre industry.
CohesiveFT won the award for the company track record and the software-defined networking (SDN) product, VNS3. VNS3 gives customers connectivity that allows businesses to extend an existing network to any cloud environment.
Runner Up - Datacentre Solutions Awards

CohesiveFT was also recently named runner up for the Data Centre Solutions Award for Public Cloud Project of the Year.
The recognition is shared with CohesiveFT's UK-based partners. The solution that earned the nomination focused on migrating and connecting public utility data to the public cloud to reduce IT overhead.
The award recognizes the best implementation of a cloud computing project (public cloud, private cloud or hybrid cloud) in any public-sector organisation with tangible benefits in either cost savings or efficiency gains.
The runner-up solution helped a customer organize and visualize more than 20 years of information (nearly 250 million data points relating to 25 million UK households) with cloud compute capacity in the IBM SmartCloud.
The customer was able to migrate and connect their cloud and physical data centers. Compute functions are now accessible at any time, from anywhere, while retaining all the functionality of a traditional enterprise-level solution. The customer chose the IBM SmartCloud as the most ecologically sound way of rolling out software, dramatically reducing their carbon footprint.
Interested in hearing more from the CohesiveFT team? Come find us in June:
CTO +Chris Swan at ODCA Forecast
17 - 18 June
Panel: Software Defined Data Centers
14.15 - 15.15pm June 17 in Colonial Room - Mezzanine Level
Moderator: Jo Maitland
Agenda and event info
CohesiveFT at Cloud Computing World Forum
26 -27 June
Agenda and event info
Demos with +Sam Mitchell at Stand 3050:
- 10.00 & 14.00 Control and Security in the Cloud - how to take control and secure your data in the cloud?
- 10.45 & 14.45 What is Software Defined Overlay Networking? - an introduction to VNS3, the market-leading overlay SDN solution
- 11.30 & 15.30 Delivering applications to your cloud extended network - import, transform, and deliver applications to your chosen cloud
- 12.15 & 16.45 Getting started with VNS3 - demo of using VNS3 Free Edition from AWS Marketplace
- Panel: CloudCamp London Preview - Network Virtualisation, SDN, NFV: what's it all about? - 11:45 in the Executive Track
- Panel: How is Cloud Changing Your Data Centre? - 26 June at 16:20 - Evolution Theatre
- Panel: The Role of Cloud in the Internet of Things Transition - 27 June at 16:20 in the Connection Theatre
CloudCamp London - 'Network Virtualisation, SDN, and NfV - what's all the fuss?'
18.30 at The Crypt in Clerkenwell, London
Organized by +Chris Purrington
Conventional wisdom and research studies both say that communication is necessary to gain successful project outcomes. However, some (too many) executives use their public relations and communications departments as a megaphone to broadcast one-way directives in the name of “dialog.” This behavior represents self-delusion and not genuine communication.
Image credit: iStockphoto
A Project Management Institute (PMI) study finds that organizations risk “$135 million for every billion dollars” they spend on projects. Of this large sum, related research from PMI concludes that “ineffective communications” drives 56 percent ($75 million) of these at-risk dollars. Based on these numbers and empirical experience, it is clear that communication plays a significant role in the success or failure of projects.
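The two PMI figures are consistent with each other: 56 percent of the $135 million at risk per billion dollars of spend works out to roughly the $75 million quoted.

```python
# Reproduce the PMI at-risk figures, per $1B of project spend.
at_risk_per_billion = 135_000_000   # PMI: dollars at risk per $1B spent
ineffective_comms_share = 0.56      # share attributed to poor communication

comms_risk = at_risk_per_billion * ineffective_comms_share
print(f"${comms_risk:,.0f}")  # $75,600,000 (PMI rounds this to $75 million)
```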
For IT projects, communication challenges arise because new technology forces organizations to change their processes and job functions. Especially on broad projects such as ERP, process change is a fundamental part of the implementation. However, streamlining processes is also important on technology initiatives that are narrower in scope, such as CRM. Because process improvement is so important, communications and change management are a standard part of every well-run enterprise software deployment.
In a previous column on change management, I note the need for project communication:
Managing transformation and change is one of the most difficult aspects of enterprise software implementations. In my study of IT failures, poor communication ranks high on the list of key issues that cause problems on large projects.
The PMI studies give voice to the truism that communications is critical to project success, but the concept itself can be problematic. The Merriam-Webster dictionary defines communication as:
- an act or instance of transmitting
- information transmitted or conveyed
- a process by which information is exchanged
Many executives treat communication and change management according to definitions one and two, ignoring the third, and by far most important, point. For these executives, communications could more accurately be called “management directives pretending to be discussions.”
A study on the ROI of communications and change management, by consulting firm Towers Watson, demonstrates an emphasis on transmitting information rather than fostering dialog and discussion. The following chart shows how the report defines “effective communication,” with most points implying one-way transmission of information:
Towers Watson’s definition of communication in a change context illustrates the point even more strongly. In this graph, communication is clearly a one-way flow rather than a bi-directional dialog:
The Towers Watson approach mirrors conventional wisdom, so I am not intending to single them out. However, it is time for executives to add careful listening to their arsenal of strategic skills; I would argue that treating communications as a one-way transmission contributes to many IT failures.
Image credit: iStockphoto
Without a bi-directional flow of information, you will not engage employees, gain their buy-in, or ensure they actually understand the communications you send. Instead, shift your communications paradigm away from transmitting information to cultivating collaboration and knowledge sharing. Learning to listen is the key skill in this new paradigm.
(Cross-posted @ ZDNet | Beyond IT Failure Blog)
In the enterprise market much of the adoption for public cloud IaaS services so far has been driven by innovators and early adopters. One of the defining characteristics of these early adopters is their willingness to accept and manage risk. These risks can come in many forms, including technological, organizational, operational, and financial. Financial risk around cloud adoption for enterprises is driven by two major sources – financial liability due to potential loss of data stored with the cloud service providers, and loss of potential revenue or business capability due to cloud service downtime.
On the surface, the idea of providing insurance specifically for cloud service providers to help enterprises address these business and financial risks would seem to make sense. During the IT outsourcing wave of the last decade, many service providers found they needed to buy insurance as they assumed the liability of operating their customers’ IT infrastructure. In addition, cybersecurity insurance, which addresses the financial impacts of data security breaches, has been in the market since the late 1990s. While some believe that cybersecurity insurance also covers cloud services, cloud computing models are creating unique challenges and risks.
One of the big barriers to cloud insurance has been the lack of data on security incidents and resulting financial claims. Ironically, the more breaches and security incidents that occur, the easier it becomes for insurers to offer coverage. With more data points, both on incidents and on the financial settlements that result, insurers are able to more effectively assess exposures, price risk, and determine premiums. Amassing the required data points and analytics is a function of cloud adoption and time.
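The underwriting logic described here can be sketched as a simple expected-loss calculation: incident frequency and claim severity come from the accumulated data, and a load factor covers the insurer's expenses and margin. This is purely illustrative, with invented numbers; real actuarial pricing is far more involved.

```python
# Toy premium pricing: premium = frequency x severity x load factor.
# All inputs below are invented for illustration.
annual_incident_prob = 0.03   # estimated from observed incident data
expected_claim = 500_000.0    # average settlement from historical claims
load_factor = 1.4             # insurer's expenses, margin, and uncertainty

expected_loss = annual_incident_prob * expected_claim
premium = expected_loss * load_factor
print(f"expected loss ${expected_loss:,.0f}, premium ${premium:,.0f}")
# expected loss $15,000, premium $21,000
```

More incident data tightens the frequency and severity estimates, which lets the insurer shrink the uncertainty component of the load factor – which is exactly why more breaches paradoxically make coverage easier to offer.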
It seems we may have reached this critical mass, as evidence mounts that a market for cloud insurance is emerging. Just last week Liberty Mutual, the third largest property and casualty insurer in the US, announced that it will be offering cloud insurance policies in partnership with CloudInsure, which will provide a data and analytics platform for assessing cloud service provider risks. In addition to Liberty Mutual's announcement, cloud insurance capacity is starting to appear with other providers as well, including through the MSPAlliance, which offers products for its members.
The emergence of cloud insurance products could change the enterprise cloud market in several interesting ways.
- Shifting the CFO / CIO balance of power – when a CIO claims that cloud services aren’t secure enough for their organization, it’s difficult for business executives to push back. The availability of cloud insurance products could help to change that dynamic. Just as insurance companies have strict underwriting processes and requirements for other types of products, so it will be for cloud. Cloud service providers and users will need to adhere to a set of policies, procedures, and controls that meet the requirements of insurers. As it’s their money at risk, you can bet the underwriters’ standards will be high. With insurance products available, CFOs will now have third-party validation they can point to around the security of cloud services. While the CIO still may have valid security or compliance concerns, the existence of insurance will change the dynamic of that conversation.
- Opening of new market segments – as technology markets mature, adoption becomes increasingly driven by more conservative buyers. While it doesn’t address the complete spectrum of risks enterprises face when they migrate to cloud services, insurance will make a segment of customers feel more comfortable migrating to public cloud models. We may find that there are certain segments of the enterprise market for which cloud insurance will be a requirement for doing business.
- Changing application migration dynamics – while most cloud service providers offer compensation for SLA violations today, to say most of them are toothless is probably an understatement. Customer acceptance of availability and performance risk is considered part of the trade-off for the flexibility and cost benefits of public cloud. Increasingly, customers are expected to architect their applications for redundancy and resiliency, and for applications to fail over to other service provider data centers or regions. Insurance may make it more palatable to migrate non-cloud-architected applications to public cloud providers, assuming the business case is there.
How the cloud insurance market evolves will be driven by a number of factors. Since Amazon is the de facto leader in public cloud IaaS, any move it makes will be quickly followed by the market. Cloud service providers offering insurance at point-of-sale to customers could be another potential game-changer. Regardless of how it evolves, cloud insurance will likely play an interesting role in the enterprise cloud market.