Novel Ideas

If ideas are a function of priors and inputs, then it is paramount to (i) reduce reliance on priors; and/or (ii) increase quantity / quality of inputs.

If people are vessels through which ideas propagate, then it is essential to be connected to the “right” set of nodes.

Pre-EF

Having spoken with a pretty representative sample of the upcoming LD19 cohort at Entrepreneur First, I am astonished by the sheer amount of talent, ambition, and drive emanating from these individuals.

My interpretation of the EF investment thesis is: if you throw a bunch of talented people into a pot, apply pressure, and stir, novel solutions to current and future problem spaces will emerge. Now that I am connected to a sizeable set of high-quality nodes, the next step is to identify problem spaces in which there is strong idea-founder fit for the solution(s).

On Problem Spaces

Problem spaces can be roughly divided into two (practically useful) categories, with varying degrees of certainty: present problems and future problems.

Present Problems

Present problems are more concrete - there exists a set of entities1 who are currently experiencing visceral frustration with fulfilling a certain desire.

“I want X, but the process to achieve X is {negative emotion}”.

Relevant frameworks here are The Mom Test and Jobs to be Done. The existence of present problems is more falsifiable, and customer discovery/development outreach (if done properly) can be used to ascertain whether there is value in solving said problems. However, due to the relative certainty of present problems, the likelihood of a problem space having already been explored or addressed by predecessors or contemporaries is pretty high.

Therefore, one should consider the Idea Maze to minimize redundant exploration and to stand on the shoulders of giants, while keeping in mind that execution heavily trumps ideas. Even if the problem is presently experienced, does the current state of technology enable an offering which exceeds the minimum usefulness threshold?

Future Problems

Future problems carry far more uncertainty - addressing them requires sufficient understanding of current trends in SOTA technology, extrapolations thereof, and the potential implications of their interplay with the broader socio-politico-economic context.

A problem might only seem problematic for the future when viewed through the lens of the present.

  • What if the attitudes surrounding that problem shift entirely in the future, due to unforeseen changes in social/economic attitudes/incentives?
  • Are there problems fundamentally rooted in the human condition, which will likely remain time-invariant?
  • How can we estimate demand for a market yet to exist / in its infancy?
  • What are some problems which will arise as a result of a structural change in society, or some groundbreaking technology?2

I conjecture that the expected value of companies built to solve future problems is higher - there is less competition at the problem-space discovery phase, a higher risk-appetite requirement for those who choose to tackle them3, and a bigger slice of a pie that has yet to be fully baked.

The best problem spaces likely lie at the intersection of both: problems which currently have a non-zero area of “pain intensity” x “no. of experiencers”, and which are likely to undergo significant increases along either or both axes in the near- to mid-term future. Scale AI was predicated on high-quality data being the lifeblood of ML/AI, and on ML/AI adoption continuing to increase. Great hunch, great execution, idea-founder fit, right to win, 🦄.
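
As a rough, purely illustrative sketch of that framing (the scoring function and all names/numbers below are made up for illustration, not part of the original framing), one could rank candidate problem spaces by current pain x reach, weighted by how much that area is expected to grow:

    # Hypothetical sketch: rank problem spaces by "pain intensity" x "no. of experiencers",
    # weighted by the expected growth of that area. All names and numbers are illustrative.
    from dataclasses import dataclass

    @dataclass
    class ProblemSpace:
        name: str
        pain_intensity: float   # 0-10: how viscerally painful the problem is today
        experiencers: float     # rough count of entities experiencing it today
        expected_growth: float  # assumed multiplier on the pain x reach area, near/mid term

    def score(p: ProblemSpace) -> float:
        # current "area" of the problem, scaled by how much it is expected to grow
        return p.pain_intensity * p.experiencers * p.expected_growth

    candidates = [
        ProblemSpace("present-only niche", 8, 1e4, 1.0),
        ProblemSpace("future-only bet", 1, 1e3, 50.0),
        ProblemSpace("present and growing", 6, 1e5, 5.0),
    ]

    for p in sorted(candidates, key=score, reverse=True):
        print(f"{p.name}: {score(p):,.0f}")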

Narrowing the Search Space

Given limited time, in both EF and life, the opportunity cost of working on problem spaces / ideas without idea-founder fit is immense.

EF espouses edge-based ideation, whereby you consider the sum total of your experiences / expertise, stack rank your competitive edges, and ideate within the constraints of your primary (and/or secondary) edge(s).4

Within the context of an EF cohort (a microcosm of the wider startup ecosystem), there must be a subset of problems one is best positioned to solve.

Edge-based ideation is a useful starting point, and a practical heuristic for evaluating the right to win in a particular problem space. It narrows the search space significantly, telling you which problems not to expend time on and directing your efforts towards navigating the problem mazes most relevant to the intersection between your edge(s) and your potential cofounders'.

Edge also enables the conception of differentiated insights, especially when considering the trajectory of technology and its impact on different markets / ecosystems / communities - a perspective that seems obvious to you may not be obvious to others without your life experiences, and vice versa.

In my experience talking to potential cofounders, I’ve noticed this seemingly alchemical emergence of ideas from edge-based ideation. Cofounder matching seems to be a highly intuitive process - ideation just feels natural with some people. But insights can arise from any conversation, even with those working in vastly different industries - Socratic-style inquiry may help reduce reliance on priors. I’m now exploring problem spaces I hadn’t previously considered, due to the marked increase in both the quantity and quality of external inputs.

“Strong Beliefs, Weakly Held”

  • Moore’s law (in terms of computational capabilities) will hold -> cost of AI inference will decrease -> you’ll eventually be able to run Foundation Models on your phone
  • Neural Network architecture will become more modular -> staying power for Foundation Models will increase as modules are built on top of them -> staying power of model-specific prompts will increase -> vibrant prompt economy (until models get sophisticated enough to minimize the loss between human intent and the written medium5)
  • Personalised AI assistants will provide coaching / suggestions for one’s personal and professional life -> these models will be dynamic, and will continually learn from your data -> there will be infrastructure for this feedback loop
  • Following the above, instructable cognitive helpers for all office tasks will become widespread -> everyone gets an intern (which might progressively automate more of your work)
  • Decentralised, community-owned Foundation Models will emerge6 -> inference will be run on community-owned hardware, with profit-sharing incentives for the initial crowd-sourcing -> might be run as a DAO
  • Guardrails for AI will become necessary7 - LLMs today are still capable of toxicity and incoherence; can that be minimized with some second-order models?
  • Foundation Models will be able to perform strict logical reasoning and even fuzzy reasoning -> democratization of preliminary legal advice8
  • Creative generation tasks and workflows will be expedited -> decrease in difficulty and skill barrier in creating/editing 2D/3D assets and videos -> implications on all industries relating to digital media (UX, marketing, film, video games etc) -> new economies / marketplaces will be created for these assets
  • Synthetic media will become indistinguishable from authentic media, to both the human eye and detection algorithms -> implications of unreliable “truth” in digital media -> how can images / videos be trusted again? (especially in the context of litigation)
  • Cheap bots that pass the conversational Turing Test will become widespread -> unreliable narratives / interactions online, social engineering schemes will increase in volume -> sure, Soulbound Tokens / Proof of Humanity can be a partial solution, but are there sufficient incentives for its adoption?
  • Personalised video narration can be generated from any arbitrary text -> Reading a dense paper? Have it narrated by an avatar with a realistic voice. Listening to a podcast without video? Automatic transcription + deepfake video generation of the host and the guest
  • Dialogue and/or scripts can be generated from the description of a narrative -> video in any style can be generated from scripts -> endless interdimensional cable
  • Code completion models will become extremely accurate, as code generated in response to natural language instructions can be tested, creating a bootstrapped supervised learning dataset -> programmers will be programming at an even higher level of abstraction / no code tools will dramatically improve (so long as one can adequately describe business logic) -> further reduced barrier to entry for entrepreneurship
  • Endless more. Might append at convenience.

That’s just a subset of my AI-related beliefs9 - there are also anticipated developments in Web3, shifts in socio-cultural attitudes10, the future of work, the macroeconomy, etc.

Now, to find that problem space, be it present or future, where my cofounding team has the right to win.

Live in the future, and build what’s missing. Notice the trends, and build at the frontier.


  1. Entities encompass individual persons and collectives. ↩︎

  2. Arguably, a black swan event in AI media synthesis. Democratization of AI which can be run on consumer-grade hardware will cause ripple effects in downstream industries, verticals, and the macroeconomy. ↩︎

  3. As if the risk appetites of entrepreneurs aren’t already high enough. ↩︎

  4. Might be slightly butchering it, refer to the linked article for the “official” explanation. ↩︎

  5. Or instructions can be somehow communicated without going through the lossy compression of natural language. ↩︎

  6. Economic incentives for this will make more sense if the staying power of Foundation Models increases. ↩︎

  7. If there is more regulatory certainty for AI, both training and inference, then tools can be created for AI compliance. ↩︎

  8. Even just writing this out, I think a subset of legal procedures are decision trees up to a certain point. Beyond that point, bespoke legal advice is required. Errors can be catastrophic. Who do you sue if there is adverse reliance on legal advice? ↩︎

  9. Although I might be horribly misguided. ↩︎

  10. To name a few: Increased sense of social isolation for some, increased desire for community / human touch, cultural divide between AI-adopters / transhumanists and AI-rejectionists, potential decrease in some aspects of human cognition due to overreliance on AI / tech. ↩︎