SKY Labs is our public experiment log where we test hypotheses, document failures, and share learnings from early-stage experiments across the SKY ecosystem. No conclusions—just transparent observations.
SKY Labs Experiments
How We Run Public Experiments
Our transparent methodology for running and documenting public experiments across the SKY ecosystem.
Early-Stage AdSense Experiments Explained
Testing AdSense layouts, placements, and formats on low-traffic SKY ecosystem sites.
What Low Traffic Data Can (and Can't) Tell You
Understanding the limits and possibilities of data analysis with 100-1000 monthly visitors.
Experiment Log: First 30 Days of a New Subdomain
Day-by-day documentation of what happens when launching a new TrainWithSKY subdomain.
Why Some Experiments Fail Completely
Detailed analysis of 24 failed experiments and what we learned from each.
Testing Without Scale: Is It Still Useful?
The value (and limitations) of running experiments with small sample sizes.
How We Document Experiments Honestly
Our template and process for transparent experiment documentation.
When to Stop an Experiment Early
Criteria and signals for stopping experiments before planned completion.
Pattern Recognition in Small Datasets
Techniques for identifying meaningful patterns in limited data.
Lessons Learned from Failed Tests
Compilation of key insights from 24 failed experiments across the SKY ecosystem.
Experiments Across the SKY Ecosystem
All experiments are conducted on real SKY platforms with actual users:
SKY ConverterTools
UX experiments, conversion rate tests, tool performance optimization
TrainWithSKY
18 subdomain experiments, content strategy tests, learning engagement studies
Our 5-Step Experiment Methodology
Every SKY Labs experiment follows this transparent process (a minimal sketch of the resulting record follows the list):
Define Clear Hypothesis
"If we change X, we expect Y to happen because Z." Specific, testable, and measurable.
Set Success Criteria
What data would support or refute our hypothesis? Defined before starting, never changed mid-experiment.
Run Controlled Test
Change one variable at a time. Control group where possible. Document all setup details.
Document Everything
Setup, timeline, unexpected events, all data points (even contradictory ones). No cherry-picking.
Share Learnings Publicly
What worked, what failed, what surprised us. No fake numbers, no revenue screenshots.
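To make the template concrete, here is a minimal Python sketch of what one experiment record could look like. It illustrates the five steps only; the names (Experiment, Hypothesis, log) and the exit-popup example values are hypothetical, not SKY Labs' actual tooling.

```python
# A hypothetical experiment record mirroring the five steps above.
# Names and values are illustrative, not SKY Labs' real tooling.
from dataclasses import dataclass, field
from datetime import date


@dataclass(frozen=True)  # frozen: a hypothesis is never edited mid-experiment
class Hypothesis:
    """Step 1: 'If we change X, we expect Y to happen because Z.'"""
    change: str       # X: the single variable being altered
    expectation: str  # Y: the measurable outcome predicted
    rationale: str    # Z: why that outcome is expected


@dataclass
class Experiment:
    name: str
    hypothesis: Hypothesis
    # Step 2: success criteria are fixed before launch (a tuple, so immutable).
    success_criteria: tuple[str, ...]
    started: date
    # Step 4: append every observation, contradictory data points included.
    observations: list[str] = field(default_factory=list)

    def log(self, note: str) -> None:
        self.observations.append(f"{date.today().isoformat()}: {note}")


# Step 3 in practice: one variable changed, everything else documented.
exp = Experiment(
    name="exit-popup-timing",
    hypothesis=Hypothesis(
        change="show the exit popup after 30s instead of 5s",
        expectation="email signups hold steady while engagement recovers",
        rationale="early popups interrupt first-time readers",
    ),
    success_criteria=("signups within 10% of baseline", "engagement drop under 5%"),
    started=date(2025, 1, 1),
)
exp.log("day 3: engagement recovering, signups slightly down")
```

Freezing the hypothesis and storing success criteria in an immutable tuple encodes the "defined before starting, never changed mid-experiment" rule in the data structure itself.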
Failed Experiments Archive
24 experiments that didn't work as expected but taught us valuable lessons:
- Social Media Auto-Posting: Automated cross-posting reduced engagement by 42% compared to platform-native content.
- Early Exit Popups: Popups at 5 seconds increased email list growth but decreased page engagement by 31%.
- AI Content Detection: No consistent pattern found for distinguishing AI-generated from human-written content on early-stage sites.
- Complex Navigation Menus: Advanced mega-menus decreased user engagement by 28% compared to simple navigation.
SKY Labs Principles
No Fake Numbers
We show actual data, even when it's unimpressive or contradicts popular beliefs.
Transparent Process
We document setup, methodology, and unexpected events—not just final results.
Failure Documentation
Failed experiments are often more valuable than successful ones. We share both.
Small Data Focus
We work with realistic traffic levels (100-1000 visitors/month), not millions; the sketch below shows what numbers that small can and can't tell you.
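As a back-of-the-envelope illustration of that constraint, this sketch computes a 95% Wilson score confidence interval for a conversion rate measured on 300 visitors. The traffic and conversion numbers are invented for the example; this is not SKY Labs' analysis code.

```python
# How much does one month at n = 300 visitors actually tell us?
# Illustrative numbers only; not SKY Labs' analysis code.
import math


def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion (behaves well at small n)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin


# Hypothetical month: 300 visitors, 12 conversions (4.0% observed).
low, high = wilson_interval(12, 300)
print(f"observed 4.0%, plausible range {low:.1%} to {high:.1%}")
# -> roughly 2.3% to 6.9%: an apparent jump to 6% could easily be noise.
```

Intervals this wide are the norm at these traffic levels, which is why small differences are best treated as noise.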
Why Transparent Experiments Matter
Builds Real Trust
Sharing failures and small wins builds more credibility than only showcasing successes.
Realistic Benchmarks
Most creators have small traffic. Our experiments reflect that reality, not enterprise-scale data.
Learning Through Doing
Experiments force us to be specific about hypotheses and measurement.
Community Learning
Shared experiments help others avoid our mistakes and build on our learnings.