Chapter 19 — Week 1: Idea generation, team formation, and the customer pain score
Welcome to Week 1. By the end of the week you will have run a 30-idea brainstorm through a four-dimensional pain score, completed at least eight pre-validation calls, picked your top idea, formed your team, and signed a founder agreement. This is the highest-leverage week of the semester: a poor idea or a fragile team will not be rescued by a brilliant pitch deck in Week 9.
Chapter overview
This chapter has six parts, the same six parts every Part V chapter will have. §19.1 (Concept) sets out the theoretical apparatus: where AI startup ideas come from, the AI-wrapper critique, the customer pain score, founder-market fit, and team formation theory. §19.2 (Method) gives you the week-1 sprint — a day-by-day specification you can execute against. §19.3 (Lessons from the cases) pulls eight specific lessons from the analytical chapters of Parts I–III, citing the chapter where each was developed. §19.4 (Tools and templates) gives you the worksheets, scripts, and legal templates to use this week, with separate notes for KL and Melbourne contexts. §19.5 (Worked example) walks through a composite team’s actual Week 1 from 32 candidate ideas to a signed founder agreement. §19.6 (Course exercises and deliverables) specifies the Week 1 deliverable with grading rubric.
How to read this chapter. Read §19.1 in full before your first team meeting on Monday. Read §19.2 with the team and decide who owns each step. Read §19.3 individually — the case lessons will save you from the most common Week 1 mistakes. Use §19.4–§19.5 as you execute. Submit against §19.6 by Sunday 23:59.
19.1 Concept
19.1.1 Three sources of AI startup ideas
There are three durable sources of AI startup ideas, and most successful AI startups draw from at least two.
Pattern recognition from existing markets. You read the case material in Parts I–III as an opportunity map. You notice that financial-services AI deployment is mature in customer service and middle-office work but immature in compliance reporting. You notice that healthcare clinical AI is concentrated in radiology and dermatology while specialty domains like ophthalmology, GI, and infectious disease are under-served. You notice that retail personalisation is dominated by the Amazon-Shopify duopoly while the marketplace-of-marketplaces (Carousell, Carsome, Mudah) layer is barely instrumented. Pattern recognition is what graduate-level case study reading produces — and it is the source of most B2B AI startups in 2024–2026.
Personal pain — “scratch your own itch.” You build the thing you wish existed. Mel Perkins (Canva, founded 2012 in Perth, headquartered Sydney since 2013) was teaching graphic design at the University of Western Australia and was frustrated by how long it took students to learn Adobe InDesign for basic layouts. Mel and Cliff Obrecht built Fusion Books — a yearbook design tool — first; Canva came later, with the same observation that “graphic design is artificially hard and expensive.” Cursor’s founders (Anysphere, founded 2022) were heavy users of GitHub Copilot who thought the IDE-around-the-model could be much better; they built what they wanted to use. Personal pain is renewable: it produces ideas you understand at depth and users you can recognise.
Adjacent-to-job opportunities. You work in industry X, and notice that ML application Y would dramatically improve a workflow you understand from inside. Harvey AI’s founders (Winston Weinberg and Gabriel Pereyra, 2022) had legal and ML backgrounds respectively; they noticed that big-law associates spent most of their billable hours on document review and citation work that was structurally susceptible to LLM augmentation. The same pattern produced Glean (ex-Google enterprise search engineers building enterprise RAG), Sierra (ex-Salesforce executives building agent-native customer service), and Cognition (ML researchers and competitive programmers building Devin, an autonomous software engineer).
The strongest founder ideas combine at least two of these sources. Cursor’s founders had pattern recognition from the AI factory’s emerging shape and personal pain from being daily Copilot users. Harvey’s founders had adjacent-to-job knowledge and pattern recognition from watching the post-GPT-3 wave commoditise text generation. Pulse’s founders (the composite case in §19.5) will have pattern recognition from sector reading and personal pain from having experienced the Malaysian secondary-education tutoring market themselves as students.
A Week-1 idea generated from only one source — typically pattern recognition without personal experience — is the most common path to the AI-wrapper failure mode (§19.1.2). Patterns without lived experience tend to produce ideas that are technically plausible but commercially shallow.
19.1.2 The AI-wrapper critique
In 2024–2026, the most common pejorative description of an AI startup is “GPT wrapper” — meaning a thin layer of UI on top of a foundation-model API, with no defensible source of advantage. Most idea-stage AI startups are wrappers in their first conception; the question is whether you can identify, in Week 1, the path by which you will stop being a wrapper.
There are exactly four escape paths from wrapperhood, each well-grounded in the AI factory framework (Chapter 3) and the strategic-advantage analysis (Chapter 5).
Path 1: Proprietary data. You build a data asset that competitors cannot easily replicate. Examples: Tessian (now part of Proofpoint) built proprietary data on enterprise email behaviour; Tractable built proprietary data on car-damage assessments; Atomwise built proprietary data on small-molecule binding. The path requires that (a) the data is generated as a byproduct of your operations, not bought from third parties; (b) the data network effect is strong (Chapter 3, §3.15) — your model is genuinely better with more data; (c) data accumulates faster than competitors can reach you. Most data-moat AI startups take 24–36 months to make the moat material.
Path 2: Distribution. You reach customers your competitors cannot. This is the entrenched-incumbent path: GitHub Copilot was hard for Cursor to dislodge because GitHub had every developer’s repo already. The reverse is also possible: Cursor reached developers via word-of-mouth and product quality and built a distribution moat against Copilot. Distribution moats can come from existing user relationships, channel partnerships, regulatory licences, or community adoption. They are typically the easiest moat to describe and the hardest to build.
Path 3: Workflow integration. You embed so deeply into a customer’s workflow that swapping you out is more expensive than tolerating you. This is the JPMorgan COiN pattern (Chapter 6, §6.11): the tool is integrated with internal case-management systems, audit trails, role-based access controls, and the bank’s lawyers’ daily routine. A new entrant would have to replicate not just the AI but the integration. Workflow moats are particularly strong in regulated and enterprise B2B contexts where switching costs are high.
Path 4: Regulatory and compliance. You build advantage by getting through a regulatory process that others have not. FDA-cleared medical AI (Chapter 7) is the canonical example: a CE-marked or FDA-510(k)-cleared algorithm has commercial value that an unverified algorithm of equivalent technical performance does not, because hospitals and insurers can buy and deploy it. The same holds for SOC 2 / ISO 27001 / HIPAA / PDPA compliance for enterprise B2B; for MAS / BNM / OJK fintech licences in financial services; and increasingly for EU AI Act compliance documentation (Chapter 14) for high-risk applications. Regulatory moats are typically slow to build but durable once built.
A team in Week 1 cannot have built any of these moats. What you can identify is which moat you are aiming for. A team that cannot answer the question “which of the four paths is yours?” with specifics — not aspiration — is still a wrapper. Use this discipline ruthlessly when scoring ideas.
19.1.3 The customer pain score
The customer pain score is a four-dimensional rubric for triaging ideas in Week 1. It is deliberately rough — the intent is to filter the 30-idea brainstorm to the top 10 candidates worth pre-validating, not to decide your Series A pitch.
The four dimensions:
| Dimension | Question | Score 1–5 |
|---|---|---|
| Severity | How painful is this problem to the customer when it occurs? | 1 = mild annoyance; 3 = real cost or frustration; 5 = mission-critical |
| Frequency | How often does the customer hit this pain? | 1 = annual; 3 = monthly; 5 = daily or hourly |
| Current alternative pain | How bad are existing solutions? | 1 = good alternatives exist; 3 = workable but flawed; 5 = no real alternative |
| Willingness to pay (WTP) | How much will the customer plausibly pay to make the pain go away? | 1 = unwilling to pay anything; 3 = some discretionary spend; 5 = budget already exists in the procurement cycle |
The composite score is the product of the four dimensions, on a scale of 1–625. A typical Week-1 brainstorm produces ideas scoring 15–80; ideas scoring above 100 are usually worth pre-validating; ideas scoring above 200 are usually worth taking to top-3 candidacy.
The product (rather than sum) matters because all four dimensions must clear a low bar. An idea with severity 5 and WTP 1 is not a startup — it is charity. An idea with WTP 5 and frequency 1 is not a startup either — annual purchases produce sales cycles too long for student timelines. The product penalises any single weak dimension, which is the right behaviour at this stage.
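A minimal sketch of the rubric arithmetic in Python. The dimensions and thresholds are the ones defined above; the Idea class and the example ideas and scores are our own illustration, not a course-mandated format:

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    severity: int          # 1-5
    frequency: int         # 1-5
    alternative_pain: int  # 1-5
    wtp: int               # 1-5

    def composite(self) -> int:
        # Product, not sum: any single weak dimension drags the whole score down.
        return self.severity * self.frequency * self.alternative_pain * self.wtp

def triage(ideas: list) -> None:
    """Rank descending and bucket by the Week-1 thresholds (>200, >100)."""
    for idea in sorted(ideas, key=Idea.composite, reverse=True):
        score = idea.composite()
        bucket = ("top-3 candidate" if score > 200
                  else "pre-validate" if score > 100
                  else "park")
        print(f"{idea.name:<30} {score:>4}  {bucket}")

triage([
    Idea("SPM revision platform", 4, 4, 3, 4),       # 192 -> pre-validate
    Idea("Compliance-report drafting", 4, 3, 4, 5),  # 240 -> top-3 candidate
    Idea("Daily stand-up summariser", 2, 5, 2, 2),   #  40 -> park
])
```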
Three caveats are worth noting before you use the rubric.
First, the rubric is calibration-dependent. Scoring is consistent within a team but inconsistent across teams; do not compare your scores against another group’s. Score the same idea twice, on different days, before committing to the rank order — the dispersion you observe in your own scoring is the noise floor of the instrument.
Second, the rubric is inadequate for two-sided markets. A platform’s pain score must be computed on both sides separately and the geometric mean taken — and the cold-start problem is not captured at all. If your idea is a marketplace, a community, or any product where value depends on multi-sided participation, append a separate cold-start analysis to the rubric (§19.4.2).
Third, the rubric is a triage tool, not a validation tool. It captures your hypothesised assessment of customer pain, not the customer’s actual pain. The pre-validation calls in §19.2.3 are what move you from hypothesised to evidenced.
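For the two-sided caveat above, a one-line helper makes the adjustment concrete: score each side with the same four-dimension product, then take the geometric mean. The numbers below are invented for illustration:

```python
import math

def two_sided_composite(side_a: int, side_b: int) -> float:
    """Geometric mean of the two sides' composite pain scores."""
    return math.sqrt(side_a * side_b)

# A marketplace scoring 192 with parents but only 24 with tutors:
print(round(two_sided_composite(192, 24), 1))  # 67.9 -- below the pre-validation bar
```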
19.1.4 Founder-market fit
Founder-market fit asks: why is this team uniquely positioned to build this product? It is the single most important question you will be asked by the mock VC panel in Week 10, and the question that should most strongly shape your Week 1 idea selection.
There are three durable flavours of founder-market fit for AI startups:
- Domain expert + technical builder. One founder has years of operational experience in the target industry; another founder can build the system. Examples: Harvey AI (Weinberg-Pereyra), Tractable (Dalyac-Ranca); domain access is what matters here. The risk: if the technical co-founder lacks ML depth, the team builds a slow brittle system that the domain expert eventually loses faith in.
- Technical + commercial builder. One founder has deep ML expertise; another has commercial-go-to-market experience. Examples: Anthropic (Dario and Daniela Amodei plus the early team), Sierra (Bret Taylor and Clay Bavor). The risk: without domain depth, the team builds technically excellent products that solve the wrong problem.
- Personal-pain founders. Multiple founders share a deep, daily experience of the problem they are solving. Examples: Cursor (every founder was a heavy IDE user), Canva (Mel Perkins was teaching design daily). The risk: founders’ confidence in their own experience can outrun the breadth of the market opportunity, producing products that are perfect for the founders and irrelevant to the broader user base.
For student teams, founder-market fit is more important than for venture-backed teams, because student teams have only 10 weeks and cannot easily acquire domain knowledge during the build. If your team has no founder with substantive experience of the problem domain, you should narrow your idea selection to domains where one of you has lived experience, or to domains that are universal enough that everyone has lived experience (consumer productivity, student tools, family-coordination apps).
This is not the only consideration in idea selection — but it is decisive in tie-breaks. Two ideas with similar customer pain scores should be discriminated by founder-market fit; the idea where the team has stronger fit wins, even if its raw score is somewhat lower.
19.1.5 Team formation and the founder agreement
Most student startups fail not because of the idea but because of the team. The most common failure modes are:
- The free-rider. One founder contributes much less than the others but holds equal equity. Resentment compounds week-by-week.
- The unilateral pivot. One founder unilaterally changes the product direction without team agreement. Trust collapses.
- The unspoken exit. A founder loses interest and stops engaging without formally leaving. The team carries dead equity into the funding round.
- The IP fight. A founder leaves the team and claims that their pre-team work is the basis of the product. Without a written assignment, the claim is plausible.
- The cross-campus communication breakdown. KL and Melbourne members fall out of sync; meetings drift to one campus’s working hours; the other campus disengages.
A founder agreement signed in Week 1 substantially reduces all five failure modes. The agreement does not need to be legally enforceable to be useful — it functions primarily as a written record of the conversation, which makes future disputes about expectations and contributions tractable rather than acrimonious.
The minimum content of a founder agreement is six things:
- Equity split — how is equity divided among founders?
- Vesting — under what conditions does each founder fully earn their equity? The standard is 4-year vesting with a 1-year cliff. For student startups, 2-year vesting with a 6-month cliff aligned to the academic year is more practical.
- Roles and decision rights — who decides what? CEO decides product direction; CTO decides technical architecture; CMO decides go-to-market; major decisions (raising capital, exiting, accepting acquisition offers) require unanimous or super-majority agreement.
- Time commitment — what is each founder’s expected contribution in hours per week? Differential time commitment usually justifies differential equity.
- IP assignment — work done by each founder for the startup is assigned to the startup. This protects against the IP-fight failure mode.
- Exit and dilution — what happens if a founder leaves? What happens when new shareholders join (e.g., advisors, accelerator equity, future funding)?
The cross-campus question (KL + Melbourne) adds two extra clauses: a meeting cadence (typically twice-weekly synchronous meetings, with both campuses’ working hours respected on alternation), and a clear primary time zone for production decisions, to avoid the failure mode where every decision waits for the other campus’s morning.
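A small sketch of the cross-campus arithmetic using Python's standard zoneinfo module. The meeting date is illustrative; the point is to write the cadence down in both campuses' local times before anyone books a recurring calendar event:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

KL, MEL = ZoneInfo("Asia/Kuala_Lumpur"), ZoneInfo("Australia/Melbourne")

# Tuesday 09:00 in KL, expressed in Melbourne local time (AEDT in March).
meeting = datetime(2026, 3, 3, 9, 0, tzinfo=KL)
print(meeting.astimezone(MEL).strftime("%a %H:%M"))  # Tue 12:00
```

Note that the KL-Melbourne offset changes when Australian daylight saving starts or ends, which is exactly why the cadence belongs in the agreement rather than in anyone's head.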
19.1.6 The Mom Test in the AI context
Rob Fitzpatrick’s The Mom Test (2013) is the canonical reference on customer interviews. The single insight that matters most: never ask people if they would buy your product. Their answer will be polite, encouraging, and useless. Instead, ask about their actual past behaviour and current pain.
For AI products specifically, three additional considerations apply:
- Anchoring. When you describe an AI product, customers tend to anchor on either the technology (“ChatGPT? I use that for emails”) or the most-publicised competitor (“oh, it’s like Copilot”). Both anchors are usually wrong for your specific product. Better to describe the workflow change you are proposing rather than the technology you are using.
- Cost extrapolation. Customers underestimate the value of small per-query time savings until they aggregate over a quarter. A useful question: “in the last week, how many times did you do task X?” This anchors WTP on actual frequency rather than abstract attitude.
- The vendor-trust question. B2B AI products are subject to data-security and compliance scrutiny that B2C products are not. In B2B interviews, ask explicitly: “if a tool did Y for your team, what would your IT/security/legal review process look like? Have you ever bought tools that failed that review?” The answer often reveals that distribution is the real bottleneck, not technology.
The pre-validation calls in §19.2.3 use these principles. The script template is in §19.4.3.
19.2 Method — the Week 1 sprint
The Week 1 sprint runs Monday through Friday with a buffer on the weekend. Allow approximately 25–30 hours of team time across the week, distributed unevenly: heavy on Monday/Tuesday, lighter mid-week, heavy again from Friday through the weekend for synthesis, team formation, and deliverable submission.
19.2.1 Days 1–2: the 30-idea brainstorm
By end of Tuesday, your team should have at least 30 candidate ideas in a shared spreadsheet, drawn from at least three of the following sources.
| Source | Prompt | Expected yield |
|---|---|---|
| Sector pattern reading | Skim Parts II–III; for each sector identify two under-served sub-domains | 6–10 ideas |
| Personal pain inventory | Each team member lists three things they wish existed in their daily life | 12–15 ideas |
| Adjacent-to-job | Each team member lists three workflow problems they observed in their last internship/job/family business | 12–15 ideas |
| Y Combinator request-for-startups | Read YC’s most recent RFS; identify which of their listed problems you could plausibly attack | 3–5 ideas |
| Local market scan | KL: scan Cradle’s funded portfolio + MyStartup directory; Melbourne: scan LaunchVic / Skalata / Startmate alumni | 4–6 ideas |
| The “what-if-it-cost-zero” prompt | If running an LLM cost zero, what application would suddenly become viable that is not viable today? | 5–8 ideas |
The deliberate over-generation matters. A 10-idea brainstorm produces a top-3 of mediocre ideas; a 30-idea brainstorm produces a top-3 with at least one strong candidate. Resist the temptation to filter while brainstorming. Add ideas you think are bad — the comparison sharpens the rest.
For each idea, capture in two sentences: (a) the customer (who experiences the pain), (b) the pain (what they currently do badly or expensively), and (c) the wedge (what your product would do that is meaningfully different). Two sentences is a forcing constraint — if you cannot describe the idea in two sentences, you have not yet understood it.
19.2.2 Day 3: the customer pain score
Apply the §19.1.3 rubric to all 30 ideas. Score each dimension 1–5. Compute the product. Sort the spreadsheet descending by score.
Three quality controls:
- Independent scoring. At least two team members score each idea independently before comparing. If your scores diverge by more than 2 points on any dimension, the divergence is itself information about which dimensions are most contested for that idea.
- Calibration anchors. Pick two ideas you all agree are weak and two you all agree are strong. Score the four anchors first; this calibrates the team’s interpretation of the 1–5 scale before you score the rest.
- Sanity check on the top 10. Every team has at least one idea that scored well because the team genuinely likes it but is, on reflection, hard to defend. Sanity-check the top 10 ideas against the AI-wrapper critique (§19.1.2): for each, name the moat path. Ideas where no team member can name a credible moat path drop out of the top 10.
The output of Day 3 is a ranked list of the top 10 ideas, with explicit scoring rationale, ready for pre-validation.
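A sketch of the independent-scoring check, assuming each scorer's marks are kept as a small dict. The field names are ours; the more-than-2-point divergence rule is the one stated above:

```python
DIMENSIONS = ("severity", "frequency", "alternative_pain", "wtp")

def contested_dimensions(scorer_a: dict, scorer_b: dict, threshold: int = 2) -> list:
    """Dimensions where two independent scorers diverge by more than `threshold`."""
    return [d for d in DIMENSIONS if abs(scorer_a[d] - scorer_b[d]) > threshold]

a = {"severity": 4, "frequency": 4, "alternative_pain": 3, "wtp": 5}
b = {"severity": 4, "frequency": 3, "alternative_pain": 3, "wtp": 1}
print(contested_dimensions(a, b))  # ['wtp']
```

The returned list is the agenda for the reconciliation meeting: argue the contested dimension out loud before touching the composite, rather than silently averaging it away.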
19.2.3 Day 4: pre-validation calls
Each team member runs at least 2 pre-validation calls, for a team total of at least 8, weighted toward the highest-ranked ideas (aim for 3–4 calls on each of the top candidates). These are 5-minute calls, not 30-minute interviews. The purpose is to test whether the customer-pain hypothesis survives contact with a single real person, not to validate the idea fully.
The script (full version in §19.4.3) has four parts:
- Anchor. “I’m a Monash student looking at how people handle [X]. Could you tell me about the last time you did [X]?”
- Behaviour question. “What did you actually do? Walk me through the steps.”
- Pain question. “What was annoying or expensive about that?”
- Alternative question. “What would you do differently if you could?”
Note what is not in the script: any description of your product, any mention of AI, any leading question. The Mom Test rule applies — let the customer talk about their behaviour, not your product.
After each call, take five minutes to write up:
- One sentence describing what surprised you.
- One specific quote that moved your thinking (positive or negative).
- An updated assessment of whether the idea’s pain score should rise or fall.
Eight calls at five minutes each is 40 minutes of talking; with roughly five minutes of write-up per call, budget about 80–90 minutes of total team time for the calling round. Allow 2 hours of calendar time on Thursday afternoon to do all eight calls.
19.2.4 Day 5: top-3 selection and the selection memo
By end of Friday, the team should have a one-page memo that:
- Lists the top 3 ideas in rank order.
- For each, states (a) the customer, (b) the pain, (c) the wedge, (d) the moat path, (e) the founder-market fit, and (f) the most important pre-validation finding.
- Identifies the primary idea you intend to pursue and the conditions under which you would switch to ideas 2 or 3.
The memo is the artefact you submit to the course instructor and the artefact your team will refer back to in Weeks 2–3 when customer-discovery findings start to challenge initial assumptions. Take it seriously.
The most common failure of this step is idea attachment: a founder who proposed an idea is reluctant to demote it. The ranking should be done with explicit willingness to demote any team member’s pet idea, including your own. If your top-3 list is missing two of your members’ most-loved ideas, you have probably done the ranking right.
19.2.5 Days 6–7: team formation and founder agreement
By end of the weekend, the team should have:
- A skills audit (§19.4.4) identifying who covers technical, business, design, and domain capability.
- A signed founder agreement covering the six elements in §19.1.5.
- An equity-split decision and a stakeholder cap table (§19.4.6).
- A meeting cadence and shared communication infrastructure (Slack/Discord/WhatsApp; shared Drive/Notion; recurring calendar events).
Equity-split conversations are notoriously awkward, especially for student teams who have not previously had financial conversations with each other. Three principles help:
- Have the conversation early. It does not get easier in Week 5 when one founder feels they are doing more work. The Week-1 conversation is where the equity politics are most flexible.
- Differential commitment justifies differential equity. A founder taking the unit full-time deserves more equity than a founder treating the unit as one of four units in their semester. This is not about who is “more important” — it is about risk capital and time.
- Use a transparent rubric. The Slicing Pie model (Moyer 2016) and the Holloway Equity Compensation guide (Holloway 2018) both provide structured frameworks. Pick one and apply it openly. The rubric is more important than the specific outcome — what matters is that everyone can see the logic.
We develop the equity-split rubric in §19.4.6 and the worked example in §19.5.
19.2.6 Sunday evening: the Week 1 deliverable
Submit the Week 1 deliverable bundle by 23:59 Sunday. The deliverable specification, with grading rubric, is in §19.6. Do not submit the bundle without the team having read each component together at least once — silent submission of components nobody else has read is the most common cause of a failing rubric on team formation discipline.
19.3 Lessons from the cases
Eight specific lessons from Parts I–III shape Week 1 decisions. We list each with chapter reference and operational implication.
19.3.1 Cursor — founder-market fit from being your own user (Chapter 5, AI-native disruption)
Anysphere’s founders were heavy IDE users who had been paying customers of GitHub Copilot since its 2021 beta. They built Cursor because they had specific complaints about Copilot’s UX that GitHub had not fixed: poor context awareness across a multi-file repo, awkward diff approval flow, weak handling of large refactors. Their Week-1 equivalent was a list of grievances about a tool they used daily. By the time they built the product, they were the customer.
Operational implication. When you score ideas, weight founder-market fit explicitly. An idea where one of you has been a paying customer of an inferior alternative scores higher, even at equivalent customer pain, because you have a more accurate model of the customer than you would otherwise.
19.3.2 JPMorgan COiN — narrow framing wins (Chapter 6, §6.11)
COiN reads commercial credit agreements. It does not read “any contract,” does not draft contracts, does not advise on contract strategy. The narrowness is the point. The bank could specify what success looked like, the lawyers knew what they were giving up and getting, and the workflow integration was tractable because the boundary of the system was well-defined.
Operational implication. When you describe your idea, you should be able to do it without the words “platform,” “ecosystem,” or “AI-powered.” If you cannot, your scope is too broad. A team that pitches “an AI-powered platform for small business” has not yet made the specification decisions that turn an idea into a product.
19.3.3 Watson Health — broad framing fails (Chapters 2, 7)
IBM positioned Watson Health as a general cancer-treatment recommendation system. The breadth made every deployment a research project: each cancer type, each hospital, each clinical workflow required separate adaptation. The system never accumulated the operational learnings that compound in narrower deployments. By 2022, IBM had invested an estimated USD 4–5 billion and sold the unit for ~USD 1 billion.
Operational implication. The breadth-versus-depth choice is decided in Week 1 by your idea description. Resist the seductive feeling that your idea is “more ambitious” because it covers more. Ambition is built by going deep on a narrow surface, then expanding once the workflow is mastered. Watson is the canonical reminder that ambition without specification is a category error.
19.3.4 Ant Group — proprietary data is the durable moat (Chapter 3, §3.11)
Ant Group’s 3-1-0 lending model works because Alipay’s transaction graph is proprietary data Ant alone can train on. A new entrant with the same algorithm but no graph data could not replicate the underwriting accuracy. The data is generated as a byproduct of operations — every Alipay transaction makes the next loan decision better.
Operational implication. When you score the moat path for each idea (§19.1.2), apply this test specifically to data-moat candidates: does the data accumulate as a byproduct of operations, or do we have to acquire it separately? A data moat that requires expensive separate acquisition is far weaker than a data moat that is generated by usage. For a student team, only byproduct-data moats are realistic in 10 weeks.
19.3.5 Klarna — deploying without testing the idea (Chapter 8, forthcoming)
In February 2024, Klarna announced that its AI customer-service assistant was handling two-thirds of its customer-service chats. By May 2025, the company quietly reversed course and began rehiring human agents. The mid-2025 reversal was attributed to falling NPS scores, customer complaints, and a brand-trust decline.
The deeper failure was not the technology but the deployment philosophy. Klarna treated full automation as a final state to be reached, not a hypothesis to be tested. Customer impact at scale was the validation step — and by then the brand cost was sunk.
Operational implication. Even a well-resourced public company can fail by skipping the alpha→beta progression. For a student team, you should never start by promising customers a finished product. Promise an alpha; deliver an alpha; learn what is broken; iterate. The validation discipline matters more than the build velocity.
19.3.6 The LISH counterexamples — small markets are perfectly viable (Chapter 3, §3.13)
The Laboratory for Innovation Science at Harvard documented dozens of mid-market firms that built useful AI factories at low cost. A 200-person logistics firm running the same four-component factory pattern as Amazon. A regional medical-device manufacturer using ML for quality assurance. An SME-scale insurance broker using LLMs for policy comparison.
Operational implication. A common Week-1 mistake is over-scoping the target customer to “the global enterprise market” when the realistic target is “Klang Valley logistics SMEs” or “Victorian community-health clinics.” A USD 10M-revenue startup serving a clearly-defined regional segment is a successful outcome by graduate-school-startup standards. Right-size your ambition to your timeline.
19.3.7 Carsome — regional market knowledge as moat (Malaysian unicorn case)
Carsome was founded in 2015 by Eric Cheng and Jiun Ee Teoh as a used-car marketplace operating across Malaysia, Indonesia, Thailand, and Singapore. The founders had specific knowledge of how the Malaysian and Indonesian used-car markets actually worked — including the role of dealer networks, the quality-inspection problem, and the consumer-financing constraints — that Silicon Valley founders would not have had. The company achieved unicorn valuation in 2021 and expanded across Southeast Asia by understanding regional dynamics that pan-regional or US firms could not match.
Operational implication. For KL-based teams, regional market knowledge is a real and defensible moat. An idea targeting a Malaysian or ASEAN-specific problem with KL founders has founder-market fit that a pan-regional or global team cannot match. This is particularly relevant for teams considering fintech, agritech, edtech, or SME-services ideas.
19.3.8 Canva — personal pain and the long arc (Australian unicorn case)
Mel Perkins began the work that became Canva in 2007 while teaching graphic design at the University of Western Australia. Her observation — that students struggled disproportionately with the Adobe tool chain — produced first Fusion Books (2007) and then Canva (2013). The company achieved decacorn status by 2021. The arc from initial pain observation to decacorn status was 14 years.
Operational implication. For Melbourne-based teams, the Australian startup ecosystem has multiple examples of decade-arc companies built from specific personal-pain observations. A 10-week course will not produce a unicorn, but it can produce an idea credibly worth the next 5 years of one founder’s effort. Personal-pain ideas have the longest sustainability runway because the founder’s commitment is internally generated rather than externally extracted.
19.4 Tools and templates
This section gives you the artefacts to use this week. Each is referenced from §19.2 above.
19.4.1 Idea generation worksheet
A 30-row spreadsheet with columns:
| Col | Field | Example |
|---|---|---|
| A | Idea name | “Pulse” |
| B | One-sentence description | “AI tutor for Form 5 maths students with localised Malaysian curriculum” |
| C | Source (sector / personal / adjacent / YC RFS / local / cost-zero) | “Personal — every team member tutored at school” |
| D | Customer | “Form 5 students and parents” |
| E | Pain (what they currently do badly) | “Pay RM 80–200/week for in-person tutoring with inconsistent quality” |
| F | Wedge (what your product does differently) | “Adaptive practice tied to SPM rubric, RM 50/month, available 24/7” |
| G | Notes | (anything else relevant) |
Maintain the sheet in shared Google Sheets / Notion / Airtable so all team members can edit simultaneously. Resist the temptation to filter or score during generation.
19.4.2 Customer pain score spreadsheet
Add four columns to the idea generation worksheet:
| Col | Field | Score 1–5 |
|---|---|---|
| H | Severity | |
| I | Frequency | |
| J | Current alternative pain | |
| K | Willingness to pay | |
| L | Composite (=H×I×J×K) | |
| M | Moat path (data / distribution / workflow / regulation / “wrapper”) | |
| N | Founder-market fit (1–5) |
Sort descending by Column L for the triage cut. Ideas outside the top 10, plus any top-10 entries marked “wrapper” in Column M, are dropped before pre-validation. The remaining ranked list goes into Day 4.
For two-sided platform ideas, append a second sheet with cold-start analysis: which side is harder to seed? what does week-1 look like with zero participants on either side?
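If you keep the sheet in Google Sheets, a CSV export makes the triage reproducible. A sketch assuming the export is saved as ideas.csv with snake_case column headers matching the fields above (the filename and header names are our assumption, not a course-specified format):

```python
import pandas as pd

df = pd.read_csv("ideas.csv")  # hypothetical export of the scoring sheet

df["composite"] = (df["severity"] * df["frequency"]
                   * df["alternative_pain"] * df["wtp"])

shortlist = (df.sort_values("composite", ascending=False)
               .head(10)                           # the triage cut
               .query("moat_path != 'wrapper'"))   # drop wrapper entries

print(shortlist[["idea_name", "composite", "moat_path", "founder_market_fit"]])
```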
19.4.3 Pre-validation call script
[ANCHOR]
Hi, I'm a Monash student studying how people handle [X].
This is a 5-minute call - I'm trying to understand your real
experience, not pitch anything. Is now still OK?
[BEHAVIOUR]
Could you tell me about the most recent time you did [X]?
Walk me through what you actually did - the steps,
the tools you used, who else was involved.
[PAIN]
What was annoying or expensive about that process?
Where did you waste time or money?
[ALTERNATIVE]
If you could change anything about how that worked,
what would you change? Have you tried other approaches?
[CLOSE]
Thanks - this was really useful. Could I follow up
in a few weeks if I have more questions?
What to record on each call (one Notion / Google Doc page per call):
- Date, name (or anonymised ID), role, sector
- One sentence: what surprised you
- One quote (verbatim) that moved your thinking
- Updated assessment: should this idea’s pain score rise or fall?
- Concrete next question: what should you ask the next interviewee about?
Aim for 5 minutes on the call, 5 minutes on the write-up. Eight calls plus eight write-ups = approximately 90 minutes of team time, comfortably inside the Thursday-afternoon window.
19.4.4 Skills audit and capability matrix
A matrix of capabilities × team members. Capabilities for an AI startup MVP:
| Capability | Member 1 | Member 2 | Member 3 | Member 4 | Member 5 |
|---|---|---|---|---|---|
| Technical — frontend / no-code build | |||||
| Technical — Python / R / data science | |||||
| Technical — LLM API integration / prompt engineering | |||||
| Business — customer-discovery interviewing | |||||
| Business — pricing and unit economics | |||||
| Business — pitch and presentation | |||||
| Design — UX flow design | |||||
| Design — visual / brand | |||||
| Domain — knowledge of target market | |||||
| Operations — project management | |||||
| Legal / regulatory awareness |
Score each cell 0 (no skill), 1 (basic), 2 (capable), 3 (strong), 4 (expert). A row with all-zero scores identifies a capability gap the team must close — through self-study, peer mentoring (e.g., from another group with strength), or course resources.
The two most common gaps in student teams are customer-discovery interviewing (which feels uncomfortable until practised) and pricing and unit economics (which is rarely taught in undergraduate business courses). Plan to develop both during the semester.
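A sketch of the gap check, assuming the grid is kept as a dict of capability → per-member scores on the 0–4 scale above. We use a floor of 2 (“capable”), which is slightly stricter than the all-zero test in the text; the example scores are invented:

```python
audit = {
    "customer-discovery interviewing": [1, 0, 0, 2, 1],
    "pricing and unit economics":      [0, 0, 1, 1, 0],
    "LLM API integration":             [2, 4, 0, 1, 2],
}

def gaps(audit: dict, floor: int = 2) -> list:
    """Capabilities where no team member reaches `floor` (2 = capable)."""
    return [cap for cap, scores in audit.items() if max(scores) < floor]

print(gaps(audit))  # ['pricing and unit economics'] -- plan to close this gap
```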
19.4.5 Founder agreement template
The template covers the six elements from §19.1.5. The full template (4 pages) is provided as a separate document; the structure is:
FOUNDER AGREEMENT — [PROJECT NAME]
Effective date: [DATE]
Parties: [FULL NAMES, ROLES]
1. PURPOSE
This agreement records the parties' understanding of their
roles, equity, time commitments, and decision rights for
the duration of [unit code] and any continuation thereafter.
2. EQUITY
2.1 Initial split: [%]
2.2 Vesting: [4-year / 2-year], [1-year / 6-month] cliff
2.3 Trigger events: see §6 (Departure)
3. TIME COMMITMENT
3.1 Each founder commits a minimum of [X] hours/week
through Week 10
3.2 Differential commitment is reflected in §2.1
4. ROLES AND DECISION RIGHTS
4.1 [CEO / Product / etc] decides: ...
4.2 Major decisions (raising capital, strategic pivots,
acquisition offers, dissolution): require unanimous
agreement OR super-majority of [4 of 5]
4.3 Tied votes: [process]
5. INTELLECTUAL PROPERTY
5.1 All work done by founders for the project is assigned
to the project entity from [DATE]
5.2 Pre-existing IP (specify): ...
5.3 Background IP licence: ...
6. DEPARTURE AND DILUTION
6.1 Departing founders: forfeit unvested shares; vested
shares retained
6.2 Future shareholders (advisors, accelerators, investors):
all founders agree to proportionate dilution
7. DISPUTES
7.1 Process: 48-hour cooling period, then full team meeting,
then mentor/instructor mediation
7.2 Last resort: dissolution process per §8
8. DISSOLUTION
8.1 Triggers: ...
8.2 IP disposition: ...
8.3 Distribution of remaining assets: ...
9. CROSS-CAMPUS PROVISIONS (KL-MELBOURNE TEAMS)
9.1 Synchronous meeting cadence: [twice weekly],
alternating preferred working hours
9.2 Asynchronous communication: [Slack / Discord channel]
9.3 Production decision time-zone: [primary]
9.4 Document storage: [shared Drive / Notion]
[SIGNATURES]
The template is non-binding under both Malaysian and Australian contract law in its student form (the parties have not exchanged consideration, and the entity does not yet exist), but it functions as a written record of the conversation. When the team forms a Sdn Bhd or Pty Ltd later — see §19.4.7 — the founder agreement is a starting point for the formal shareholders’ agreement that will then be drafted by counsel.
19.4.6 Equity-split rubric and cap table
Three approaches to equity splits, in increasing order of structure:
(1) Equal-split with reservations. Default to equal splits across the team; document specific concerns (e.g., one founder’s reduced commitment) as “to be revisited in Week 5.” Best for teams with similar commitment levels and limited prior history.
(2) Holloway transparent rubric. Score each founder on five dimensions (idea generation, technical contribution expected, time commitment, risk capital, prior commitment) on a 1–5 scale; equity is proportional to the sum. Best for teams with materially different roles or commitment levels. The Holloway Equity Compensation guide (2018) is the canonical reference.
(3) Slicing Pie dynamic split. Equity is allocated proportionally to ongoing contributions (time, money, IP) rather than fixed at the start. Best for teams that expect substantial contribution differences during the build but want to avoid the equal-split-with-reservations awkwardness. The Moyer (2016) Slicing Pie book is the standard reference.
Pick one and apply it openly. Document the chosen rubric in your founder agreement (§19.4.5, Item 2).
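A sketch of approach (2), the transparent-rubric split. The dimension names and scores below are illustrative rather than Holloway's exact schema; the mechanism is simply equity proportional to each founder's rubric total:

```python
def rubric_split(scores: dict) -> dict:
    """Equity percentage proportional to each founder's rubric total."""
    totals = {name: sum(dims.values()) for name, dims in scores.items()}
    grand_total = sum(totals.values())
    return {name: round(100 * t / grand_total, 1) for name, t in totals.items()}

scores = {
    "founder_1": {"idea": 5, "technical": 3, "time": 5, "risk": 4, "prior": 4},  # 21
    "founder_2": {"idea": 2, "technical": 5, "time": 5, "risk": 4, "prior": 3},  # 19
    "founder_3": {"idea": 2, "technical": 2, "time": 3, "risk": 2, "prior": 2},  # 11
}
print(rubric_split(scores))
# {'founder_1': 41.2, 'founder_2': 37.3, 'founder_3': 21.6}
# Rounding residue (0.1 pp here) is settled by discussion, like everything else.
```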
The cap table at Week 1 is simple — typically four or five founders sharing 100% of a notional 1,000,000 shares, with a 10–15% future advisor / accelerator / employee pool reserved. The cap-table template:
| Stakeholder | Shares | Percentage | Vesting | Notes |
|---|---|---|---|---|
| Founder 1 (CEO) | 250,000 | 25.0% | 2y / 6mo cliff | Full-time |
| Founder 2 (CTO) | 250,000 | 25.0% | 2y / 6mo cliff | Full-time |
| Founder 3 (CMO) | 200,000 | 20.0% | 2y / 6mo cliff | Full-time |
| Founder 4 (Operations) | 150,000 | 15.0% | 2y / 6mo cliff | Part-time (~20h/week) |
| Founder 5 (Curriculum, Melbourne) | 150,000 | 15.0% | 2y / 6mo cliff | Part-time (~20h/week) |
| Future pool | (reserved) | (10–15% post-issue) | — | Advisors / accelerator |
| Total issued | 1,000,000 | 100.0% | — | — |
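The vesting column is mechanical enough to sanity-check in code. A sketch of linear monthly vesting with a cliff, using the 2-year / 6-month student schedule from the table; the month-9 departure example is ours:

```python
def vested_fraction(months_elapsed: float,
                    vesting_months: int = 24,   # 2-year student schedule
                    cliff_months: int = 6) -> float:
    """Fraction of a founder's shares vested, linear monthly with a cliff."""
    if months_elapsed < cliff_months:
        return 0.0   # leaving before the cliff vests nothing
    return min(months_elapsed / vesting_months, 1.0)

# A founder holding 250,000 shares who leaves at month 9:
print(int(250_000 * vested_fraction(9)))   # 93750 vested; the rest is forfeited
```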
19.4.7 Legal context — KL and Melbourne
Most student teams will not register a company in Week 1; the founder agreement substitutes for a shareholders’ agreement until the team is confident the project will continue post-semester. When you do choose to register — typically in Week 7–10 after demonstrable customer traction — the relevant legal forms differ across the two campuses.
KL — Sdn Bhd registration.
A Sendirian Berhad (Sdn Bhd, “Private Limited”) is the standard form for Malaysian startups. Key facts:
- Registered with: Suruhanjaya Syarikat Malaysia (SSM, Companies Commission of Malaysia) under the Companies Act 2016.
- Cost: approximately RM 1,000–2,500 in registration fees, RM 1,000 paid-up capital, plus advisor fees if using a company secretary (~RM 200–500/month).
- Minimum requirements: at least one resident director (Malaysian or permanent resident), one company secretary, a registered office, audited accounts annually.
- Eligibility for Cradle Investment Programme (CIP) Spark / Ignite: Sdn Bhd registration is a prerequisite. CIP Spark provides up to RM 250,000 conditional grant; CIP Ignite up to RM 750,000 conditional grant. See cradle.com.my for current eligibility criteria.
- MyStartup: a Malaysia Digital initiative providing recognition and access benefits for early-stage startups; not a grant programme but useful for ecosystem visibility.
- MaGIC pivot: the Malaysian Global Innovation & Creativity Centre (MaGIC) was merged into MRANTI (the Malaysian Research Accelerator for Technology and Innovation) in 2021; the ecosystem programmes MaGIC previously ran are now operationalised through MyStartup, Cradle, and MDEC.
Melbourne — Pty Ltd registration.
A Proprietary Limited (Pty Ltd) company is the standard form for Australian startups. Key facts:
- Registered with: Australian Securities and Investments Commission (ASIC) under the Corporations Act 2001.
- Cost: AUD 597 ASIC registration fee (2026), plus advisor fees if using an accountant or lawyer (~AUD 500–1,500 setup).
- Minimum requirements: at least one Australian-resident director, an Australian Company Number (ACN), a registered office, ASIC annual review fee.
- R&D Tax Incentive: up to 43.5% refundable tax offset on eligible R&D expenditure for companies with annual aggregated turnover under AUD 20M. Highly material for software startups; consult an R&D tax adviser before lodging.
- Early Stage Innovation Company (ESIC) status: provides 20% non-refundable tax offset to investors investing in qualifying early-stage innovation companies. Eligibility requires meeting principles-based or 100-point innovation tests.
- LaunchVic, Skalata, Startmate: state and accelerator-based programmes for Victorian startups; LaunchVic provides funding for ecosystem programmes, Skalata is a Melbourne-based seed-stage accelerator, Startmate is a national accelerator with a Melbourne presence.
For cross-campus KL-Melbourne teams, a typical structure post-semester (if the team continues) is to register the operating entity in one jurisdiction and have the other-jurisdiction founders hold equity through that entity. Registering in both jurisdictions creates double-compliance overhead that is rarely worth it pre-Series-A.
This week, you do not need to make any of these decisions. You only need to know what the options are, so the founder agreement does not over-specify. Document in §6 of the founder agreement that the entity-formation question will be revisited in Week 7.
19.5 Worked example — Team Aroma chooses Pulse
Team Aroma is a composite team formed in Week 1 of MGW5701 AI in Business at Monash. The team has five members:
- Aliyah (Monash KL, Bachelor of Business / Computer Science double degree, year 4) — has tutored secondary-school maths since high school.
- Wei Hao (Monash KL, Bachelor of Computer Science, year 4) — completed a 6-month internship at a fintech startup; comfortable with Python and basic ML.
- Sara (Monash KL, Bachelor of Design, year 3) — UX background, interned at a regional advertising agency.
- Daniel (Monash Melbourne, Master of Business Information Systems, year 1) — five years prior work experience at an Australian edtech firm.
- Priya (Monash Melbourne, Bachelor of Commerce / Computer Science, year 3) — content background, interned with a curriculum-design organisation.
Their first meeting is on Monday at 10am KL time / 1pm Melbourne time — the first cross-campus call. They work through §19.2 in order.
Days 1–2: brainstorm
By Tuesday evening they have 32 ideas in the spreadsheet, drawn from four of the six channels: 7 from sector pattern reading (a skim of Parts II–III), 12 from personal pain (each member contributed 2–3), 9 from adjacent-to-job (Daniel’s prior edtech experience, Wei Hao’s fintech internship, Aliyah’s tutoring), and 4 from local market scans.
Their list spans fintech (small-business invoicing automation, alternative-data SME credit-scoring, micro-investment for university students), healthtech (Mandarin/BM-language symptom checker, mental-health support chat for university students), edtech (5 separate ideas — including Pulse, a SPM revision platform, an essay-feedback tool, a maths-tutor matchmaker, and a parent-teacher communication app), agritech (palm-oil disease detection, smallholder farm-management app), legaltech (Malaysian commercial-litigation precedent search), and consumer (a “what to cook tonight” planner, a wedding-vendor marketplace).
Day 3: scoring
The team scores all 32 ideas independently. Two members score each idea, then they meet to reconcile.
The top 10 by composite score:
| Rank | Idea | Sev | Freq | Alt | WTP | Composite |
|---|---|---|---|---|---|---|
| 1 | Pulse — SPM revision platform | 4 | 4 | 3 | 4 | 192 |
| 2 | Mandarin/BM symptom checker | 5 | 2 | 4 | 3 | 120 |
| 3 | Mental health chat for uni students | 5 | 4 | 3 | 2 | 120 |
| 4 | Smallholder farm-management | 4 | 5 | 3 | 2 | 120 |
| 5 | Parent-teacher comms app | 3 | 5 | 4 | 2 | 120 |
| 6 | SME alternative-data credit | 4 | 3 | 3 | 3 | 108 |
| 7 | Essay-feedback tool | 3 | 4 | 3 | 3 | 108 |
| 8 | Maths-tutor matchmaker | 4 | 3 | 3 | 3 | 108 |
| 9 | Palm-oil disease detection | 5 | 3 | 3 | 2 | 90 |
| 10 | “What to cook tonight” | 2 | 5 | 3 | 3 | 90 |
The mental-health chat idea drops out of the top 10 because the team cannot identify a credible moat path — every member of the team agrees they would be building a “wrapper” on the publicly available LLMs. Smallholder farm-management drops out because of its WTP score of 2 (smallholders are price-sensitive, and the team has no agritech distribution). They go to pre-validation with the remaining eight.
Day 4: pre-validation
The team distributes the calls across the remaining eight ideas (16 in total, three or four per member) and schedules them for Thursday afternoon. The calls produce surprising findings:
- Pulse: 4 calls (3 with Form-5 parents, 1 with a Form-5 student). Strong response. One mother said: “I pay RM 350/week for two-on-one tutoring at Mr Wong’s centre. The teacher is good but my daughter only goes Saturday. I would pay RM 80–100/month for something she can use any time.” Another said: “We tried Snapask, it was OK but the questions weren’t aligned with the SPM rubric.” The composite score rises to ~250 after the calls.
- Symptom checker: 4 calls (with two parents, one elderly relative, one nursing student). Lukewarm. Pain is real but trust in AI for medical decisions is low. WTP estimated at RM 0–10/month. Score falls.
- Essay-feedback tool: 3 calls (with university students). Pain real, frequency high. But the calls reveal that ChatGPT already serves this need adequately for free. Score falls.
- Maths-tutor matchmaker: 3 calls (parents). The pain is real, but the calls reveal that this is a marketplace problem (cold-start: how do you bootstrap both sides?). The pain score holds, but the moat-path analysis flags the cold-start as a challenge incompatible with the 10-week timeline.
- SME alternative-data credit: 2 calls (small-business owners). Pain real, but the team has no banking-licence path; this is a “wrapper” without distribution. Drops out.
By Friday morning the team has a clear top-3: Pulse, parent-teacher comms, palm-oil disease detection. Pulse is the runaway leader.
Day 5: top-3 memo
The memo:
Top 3 ideas — Team Aroma (KL-Melbourne)
Rank 1 — Pulse (primary). Customer: parents of Malaysian Form 5 students preparing for SPM. Pain: parents pay RM 200–500/week for in-person tutoring of variable quality, with timing constrained to weekend hours; existing online alternatives (Snapask, Pandai) are not aligned with the SPM rubric specifically. Wedge: AI-tutored adaptive practice tied directly to the SPM examination structure, RM 50–80/month, available 24/7 in BM, English, and (later) Mandarin. Moat path: workflow integration with the SPM exam structure plus proprietary student-performance data accumulated as students use the platform. Founder-market fit: 3 of 5 members tutored at school; Aliyah was a top-tier SPM scorer in maths. Pre-validation: 4 of 4 calls confirmed pain at WTP RM 50–100/month; one parent stated explicit intent to pay if the product existed.
Rank 2 — Parent-teacher comms app. Customer: parents and teachers in Malaysian primary/secondary schools. Pain: communication is fragmented across WhatsApp groups (school-wide), Telegram (some classes), Google Classroom (academic), with no integration. Wedge: unified inbox with AI translation (BM ↔︎ English ↔︎ Mandarin) and sentiment-aware urgency triage. Moat path: workflow integration with school MIS systems. Founder-market fit: 2 of 5 members have parent-teacher exposure; weaker than Pulse. Concern: cold-start (need both schools and parents).
Rank 3 — Palm-oil disease detection. Customer: smallholder palm-oil farmers in Malaysia and Indonesia. Pain: yield loss from late-detected Ganoderma fungal infection. Wedge: smartphone-based image classifier with localised treatment guidance. Moat path: data accumulation. Founder-market fit: weakest of the three; team has no agritech background. WTP concern: smallholders are highly price-sensitive.
Decision. We commit to Pulse as the primary idea. Switch conditions: if Week 2’s broader customer discovery (n ≥ 20 interviews) reveals (a) WTP below RM 30/month, or (b) existing alternatives have meaningfully closed the SPM-alignment gap, we re-evaluate against Rank 2.
Days 6–7: team formation
The skills audit reveals a near-complete team: technical (Wei Hao, Aliyah), business (Daniel, Aliyah), design (Sara), domain (Aliyah on tutoring, Daniel on edtech, Priya on curriculum). The notable gap is pricing and unit economics, which the team plans to develop via the course materials in Week 8.
The equity conversation takes place over two evenings. The team uses the Holloway transparent rubric, scoring each member on time commitment (full-time = 5, ~20 hours/week = 3, ~10 hours/week = 2), prior contribution (ideation, prior research), and risk capital (prior earnings forgone, parental support, etc.).
The settled split: Aliyah 25% (CEO; full-time; idea originator), Wei Hao 25% (CTO; full-time), Sara 20% (Head of Design; ~25 hours/week), Daniel 15% (Head of Curriculum, Melbourne; ~20 hours/week; partly responsible for prior edtech-relevant content), Priya 15% (Head of Content, Melbourne; ~20 hours/week).
Vesting: 2 years, 6-month cliff, with a “completion-of-semester” milestone at the 12-week mark that releases an extra 25% of vested equity to all founders who complete the unit deliverables. The cliff matters because anyone who drops out before the six-month mark vests nothing, which is a deliberate design choice to reduce the dead-equity failure mode.
The cross-campus provisions: synchronous meetings twice weekly (Tuesday 9am KL / 12pm Melbourne; Friday 4pm KL / 7pm Melbourne), with monthly alternation of who has the more inconvenient time. Asynchronous communication on a shared Slack workspace. Production decision time-zone: KL (because three founders are there) but with explicit 24-hour response windows for any decision Daniel or Priya disagree with.
Sara drafts the founder agreement on Saturday using the §19.4.5 template; the team reviews and signs on Sunday. They register a shared Notion workspace for documentation, a shared Google Drive for files, and a shared GitHub organisation for code. They submit the Week 1 deliverable bundle on Sunday at 23:30 — within the deadline but not by much, which is normal for Week 1.
What Team Aroma got right and what they almost got wrong
Three things they did well: they over-generated ideas (32 not 10); they had the equity conversation early and openly; they wrote down their switch conditions for future re-evaluation. The third matters because it lets them pivot in Week 4 without it feeling like failure — the conditions for pivot are pre-specified.
Three things they almost got wrong: they nearly skipped the pre-validation calls because the team was confident Pulse was the right answer (but the calls revealed a critical SPM-alignment insight that shaped the wedge); they almost defaulted to equal equity (which Daniel and Priya would have resented within a month given their lower commitment); they almost did not write down the cross-campus working-hours rotation (which would have produced exactly the cross-campus communication breakdown failure mode within 4 weeks).
The pattern is general. Week 1 is high-leverage because the small disciplined steps prevent large later failures.
19.6 Course exercises and Week 1 deliverable
Submit the Week 1 deliverable as a single shared folder (Google Drive / Notion workspace) with the following six artefacts. Submit the link by Sunday 23:59 of Week 1.
19.6.1 Required artefacts
- Idea generation worksheet (§19.4.1). At least 30 candidate ideas, with the seven specified columns populated.
- Customer pain score spreadsheet (§19.4.2). All 30+ ideas scored on the four dimensions, sorted by composite score, with the moat-path column completed for the top 10.
- Pre-validation call notes (§19.4.3). Notes from at least 8 calls (2 per founder, minimum), with the four-element synthesis per call.
- Top-3 selection memo (§19.2.4). One page, in the format used in §19.5.
- Skills audit and capability matrix (§19.4.4). Completed grid identifying capability strengths and gaps.
- Signed founder agreement (§19.4.5). All six elements covered, all founders’ signatures captured (electronic signatures are acceptable for the Week 1 submission; wet signatures are required for any post-semester registration).
19.6.2 Grading rubric (50 points)
| Component | Points | Distinction-level criteria |
|---|---|---|
| Idea diversity | 5 | At least 30 ideas spanning 4+ sectors; each idea attributed to its source, with 3+ of the six sources used |
| Score rigour | 10 | Independent scoring by ≥2 members per idea; explicit reasoning per dimension; calibration anchors used |
| Pre-validation depth | 10 | ≥8 calls; verbatim quotes captured; synthesis identifies updated assessment per call |
| Top-3 memo discipline | 10 | Customer / pain / wedge / moat / FMF / pre-validation all addressed; switch conditions explicit |
| Team formation | 10 | Skills matrix complete; founder agreement signed with all 6 elements; equity rubric applied openly |
| Founder-market fit | 5 | Articulated for the chosen idea with specific evidence (not generic claim) |
Pass: 30 points. Credit: 36 points. Distinction: 42 points. High Distinction: 47 points.
A common failure mode at this stage is to submit polished components that no team member other than the author has read. The team grade is reduced by 5 points if any team member, when asked, cannot explain the reasoning behind a component their team submitted.
19.6.3 Things to do before Monday of Week 2
By Sunday evening of Week 1, in addition to the deliverable submission:
- Schedule the Week 2 customer-discovery interview slots (you will need ≥20 interviews by the end of Week 2; book the calendar now).
- Establish the project Slack/Discord channel and add the unit instructor for visibility on team conversations.
- Read Chapter 1 of the analytical track (the introduction) and the first two sections of Chapter 20 (Week 2 — customer discovery). The cross-loading begins from Week 2.
References for this chapter
Foundational entrepreneurship
- Aulet, B. (2013). Disciplined Entrepreneurship: 24 Steps to a Successful Startup. Wiley.
- Blank, S. (2013). Why the lean start-up changes everything. Harvard Business Review 91(5): 63–72.
- Christensen, C. M., Hall, T., Dillon, K., and Duncan, D. S. (2016). Competing Against Luck: The Story of Innovation and Customer Choice. Harper Business.
- Fitzpatrick, R. (2013). The Mom Test: How to Talk to Customers and Learn If Your Business Is a Good Idea When Everyone Is Lying to You. Founder Centric.
- Ries, E. (2011). The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses. Crown Business.
- Wasserman, N. (2012). The Founder’s Dilemmas: Anticipating and Avoiding the Pitfalls That Can Sink a Startup. Princeton University Press.
Equity, vesting, and team dynamics
- Holloway (2018). Holloway Guide to Equity Compensation. Holloway. Available at: https://www.holloway.com/g/equity-compensation.
- Moyer, M. (2016). Slicing Pie: Funding Your Company Without Funds. Lake Shark Ventures.
- Hellmann, T. and Wasserman, N. (2017). The first deal: The division of founder equity in new ventures. Management Science 63(8): 2647–2666.
The AI-wrapper critique and 2024–2026 startup landscape
- Menlo Ventures (2025). 2025: The state of generative AI in the enterprise. Menlo Ventures Annual Report.
- Foundation Capital (2026). Where AI is headed in 2026. Foundation Capital Annual Outlook.
- Y Combinator (2024–2026). Requests for Startups. Available at https://www.ycombinator.com/rfs.
Cases referenced in §19.3
- Iansiti, M. and Lakhani, K. R. (2020). Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World. Harvard Business Review Press. (Watson Health, Ant Group, LISH counterexamples in Chapters 1–4.)
- McKinsey & Company (2025). The state of AI: Global survey.
- Klarna AB (2024). Klarna AI assistant handles two-thirds of customer service chats in its first month. Press release, 28 February 2024.
- Klarna AB (2025). CEO interview, May 2025; reversal of full-AI customer service strategy.
- Carsome Group Annual Reports (2021–2024).
- Canva Pty Ltd Public Statements; Mel Perkins interviews 2018–2024.
- Lamarre, E., Smaje, K., and Zemmel, R. (2023). Rewired: The McKinsey Guide to Outcompeting in the Age of Digital and AI. Wiley.
KL ecosystem references
- Cradle Fund Sdn Bhd (2026). Cradle Investment Programme: Spark and Ignite eligibility criteria. cradle.com.my.
- Suruhanjaya Syarikat Malaysia (2025). Companies Act 2016 — incorporation guidelines for Sdn Bhd. ssm.com.my.
- Malaysia Digital Economy Corporation (MDEC) (2025). MyStartup ecosystem directory.
- Bank Negara Malaysia (2024). Financial Technology Regulatory Sandbox — application guidelines.
Melbourne ecosystem references
- Australian Securities and Investments Commission (2026). Pty Ltd registration — fee schedule and requirements. asic.gov.au.
- Australian Taxation Office (2026). Research and Development Tax Incentive — guidance. ato.gov.au.
- LaunchVic (2025). Victorian startup ecosystem report.
- Skalata Ventures (2025). Seed-stage application criteria.
- Industry Innovation and Science Australia (2024). Early Stage Innovation Company (ESIC) tax offset — investor and company eligibility.
Further reading
For team formation and founder dynamics, Wasserman’s The Founder’s Dilemmas (2012) is the empirical reference; the Hellmann-Wasserman Management Science paper is the academic depth. For customer discovery, Steve Blank’s The Four Steps to the Epiphany (2005) and The Startup Owner’s Manual (Blank and Dorf, 2012) are the practitioner references; Fitzpatrick’s Mom Test is the more recent and more practical guide. For pricing and unit economics — which we develop in Chapter 26 — Madhavan Ramanujam’s Monetizing Innovation (2016) is the standard reference.
For the AI-wrapper critique specifically, the Foundation Capital and Menlo Ventures annual reports are the most-current public-record sources; the academic literature has not yet caught up to the 2024–2026 wave.
For the Malaysian startup ecosystem in particular, the Cradle Annual Report and the MDEC Digital Economy Outlook are the standard annual references; Tan and Khor (2024) Fintech Regulation in ASEAN covers the regional regulatory landscape. For the Australian startup ecosystem, the State of Australian Startup Ecosystem Report (Crossroads, annual) is the standard reference; the LaunchVic Year in Review covers the Victorian-specific landscape.
Read Chapter 1 (Introduction: the adoption-value paradox) and §20.1–§20.2 of Chapter 20 (Customer discovery) before Monday of Week 2.