Thoughts on working together, and supporting organizations
I believe in servant-leadership, and have subscribed to this approach since early in my career.
Most of my work has been with startups. Bootstrap to Seed. Seed to Series A. A to B. B to flip. I have also worked with two global enterprises.
I’ve started 5 companies, and led a capital raise. I’ve been a paid advisor to several startups and stable small businesses. I’ve scaled companies from the 10s to the 60s, and past 150. I’ve taken a pivotal role in engineering organizations of over 200 headcount. My range spans raising capital as a startup founder, running my own biz dev and sales as a consultant, and leading technology teams at large corporations.
How do you face and approach a problem?
I have extensive experience owning, introducing, guiding, and measuring agile whole-company planning, release, and LOB operations. I have routinely advised on and audited contracts, and I’ve developed IP strategies and filed patents. I’ve nearly led HR operations, talent recruiting, and internal training. I’ve been CISO or similar, and initiated and operated the security diligence and practice. I have a long history of managing and building infrastructure. I understand ML in depth and have built cognitive products. My training and background are in HCI (UX research), focused on data representation and modeling, and related measures.
With some qualifiers, as many as 10 years’ experience. This comes from being the first and primary engineering leader in several startup operations, and being active in driving the strategic dialogue around the needs, operations, and execution of the health of the business. So my qualifier concerns the scale of company at which I’ve shouldered the CTO role.
4 years formally under that title, though for the past 8 years I have served in that role, or higher. Having started 4 companies and led a capital raise once, I have as many as 10 years’ total experience with the practices and attention related to the VP of Engineering role.
I think the core of this is to form a deep understanding of individuals: really listen to them, work with their strengths, assess needs, and set timelines to close gaps. I am very mindful in how I apply these practices to develop diverse, high-performing teams. I have cultivated lifelong-learning environments that maintain a collegial energy for people at all career stages.
At Medidata Solutions, I led a cornerstone reinvention of the app patterns and agile practices for products within our vertical. This became a template for other teams in the US and UK. From there, I took a lead role in 2 guilds (cross-team practice groups), participated in a third, and played a central role in the development and roll-out of the new corporate intranet for all 1300 staff.
I developed assessment models for promoting growth (individual and group agility) among technology teams and talent. I don’t believe in weaponizing performance data; people can be motivated to grow in far more humanized ways. Data is a mirror from which we can ask questions and explore the best way to help someone achieve the growth they want most.
I have excelled in roles that combine highly technical platforms and products (ML, marketplaces, etc), a passion for business success (strategy, pitch, brand), and a passion for whole-company quality (ops, tooling, and BI). Finding the sweet spots for implementation and execution is both an art and a science, and one that I pursue with real enthusiasm.
Yes, extensive. I have worked with Ruby and Rails since ~2006, and consider myself a rubyist. I’ve built and released public gems, created unix-idiomatic gems, maintained private gem servers, maintained modifications to Rails core, and developed in non-Rails ruby environments (vertx.io, Grape APIs, etc). I’ve implemented fine-grained caching strategies, from deciding where to memoize up to Varnish and ESIs. I’m a recent fan of the Trailblazer pattern. I’ve developed API products, platform products, and advanced SEO/SEM content publication and marketing tech, using a blend of Rails and modern PWA practices. I’ve leveraged instrumentation patterns using exception handlers and runtime inspection. There are probably other things to mention, but I felt a stream of consciousness would better convey the scope in this case.
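To make the “where to memoize” end of that caching spectrum concrete, here is a minimal sketch in plain Ruby. The `RateTable` class and its injected `fetcher` are hypothetical names for illustration; in a Rails app the same pattern would typically sit in front of `Rails.cache`, with Varnish/ESI handling the edge.

```ruby
# Sketch: per-key memoization so repeated lookups within a request
# lifecycle hit the data source only once. The fetcher is injected,
# which also makes the caching behavior easy to test.
class RateTable
  def initialize(fetcher)
    @fetcher = fetcher  # e.g. a lambda wrapping a DB or API call
    @cache   = {}
  end

  def rates_for(product_id)
    # ||= memoizes: the fetcher runs only on the first miss per key
    @cache[product_id] ||= @fetcher.call(product_id)
  end
end

calls = 0
table = RateTable.new(->(id) { calls += 1; { id: id, apy: 4.5 } })
table.rates_for(:savings)
table.rates_for(:savings)
calls # => 1
```

The same decision shows up at every layer: memoize in-object, fragment-cache the rendered view, then let Varnish and ESIs absorb whole-page traffic.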
I have written extensively on scaling agile practices, here.
I am innovation-driven, but I have a strong grasp of costs, delivery time, and life-cycle. I have led all manner of engineering projects, and have the product and HCI (design-science) knowledge to own the delivery of public products and executive analytics panels. I learned from Dr. Randy Pausch and his acclaimed CMU usability lab, Stage 3, with further experience at ASU, in Arts, Media and Engineering, with cognitive and perceptual psychologist Dr. Mike McBeath.
Along with that comes experience with most areas of the build-vs-buy decision, with fullstack, mobile, data science, BI, platforms, networks, SaaS, workflow, security, infrastructure, and more.
I have built internal tools and workflows for sales, marketing, and strategy teams that optimize time spent and lead their efforts toward business science.
I have led strategy and platform product definition in B2B2C environments, and the modernization of numerous legacy internal CMS, CRM, and compliance toolchains. This includes experience defining projects, collecting requirements, writing detailed functional and test specifications, coordinating efforts to scope, schedule, and deploy new feature sets, thinking beyond the bounds of the scenario at hand, and creating an environment where everyone can share these same ideas.
This most commonly happens when business and sales teams over-promise to a special partner in order to frame business growth that can move stepwise toward securing capital leveraged on a large next valuation, and then product engineering teams cannot fully deliver, because the over-promise was a true over-promise.
It’s best to rein this in by establishing chartering and feedback loops as a regular lightweight practice involving a leadership group. This ensures that forward-deployed staff can trust they’ll get feedback at the speed the relationship demands.
When the problem has already been created, you can sometimes be lucky enough to solve it with staff augmentation by consultants, either project-based or by increasing hours with existing individuals or agencies. When that isn’t possible, find the absolute minimum spike that proves the new feature, validate it with the partner, then sprint to a business POC, gather feedback, adjust, launch an MVP, and iterate from there.
Many were embedded above. The SEO and SEM measures enabled us to push many of our pages into the first 1-2 pages of search results. I learned more elaborate strategies for this at Investopedia.
With their personalization and recommendation systems, the key metrics were the sizes of clusters/segmentations and our ability to price them more strategically. We were also able to develop programs and offerings for smaller audiences than would previously have been efficient.
Most of the above systems had that constraint. Individual pages or widgets would be redeveloped with new tech that provided better data. Deployment would enable production ‘isolation’ and reliability. New data dashboard pages would be beta tested before migration. External tools would be leveraged before building and supporting something in house.
New product / features would be chartered and agile-planned at the business level before leaping to an engineering build. This created lean loops that minimized Engineering time until relevant.
The personalization (p13n) engine. The way that p13n was used for SEM, as well as network data management & modeling.
Another was a ‘market business cloud’, which I developed and evolved at Bloomberg, Medidata, and Even. This is a PaaS product that enabled partners to write and deploy code (apps and analytics) into our N-sided marketplace. In each company, this led to IP-development discussions with the C-suite and patent counsel.
The strategies to prepare for scaling are beyond the scope of this answer box. Avoiding early tooling and environment costs (often tied up in typesafe languages) while leaning into smart, cacheable systems designs is one reliable way to scale up at low cost and preserve the engineering team’s ability to plan for the optimal high-volume-business design. Many (not all) workarounds are fixes based on this template.
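As one small illustration of what “cacheable systems design” buys you, here is a framework-free Ruby sketch of conditional responses. The `respond` helper and its hash-shaped response are hypothetical simplifications; the underlying mechanism (strong ETags plus `Cache-Control`, per HTTP conditional requests) is what lets edge caches like Varnish revalidate cheaply instead of re-rendering.

```ruby
require "digest"

# Sketch: derive a strong validator (ETag) from the response body so
# clients and intermediary caches can ask "has this changed?" and get
# a tiny 304 instead of a full re-render.
def respond(body, if_none_match: nil)
  etag = %("#{Digest::SHA256.hexdigest(body)[0, 16]}")
  if if_none_match == etag
    # Body unchanged: revalidation succeeds, send headers only
    { status: 304, headers: { "ETag" => etag }, body: "" }
  else
    { status: 200,
      headers: { "ETag" => etag, "Cache-Control" => "public, max-age=60" },
      body: body }
  end
end

first = respond("<h1>Rates</h1>")
again = respond("<h1>Rates</h1>", if_none_match: first[:headers]["ETag"])
again[:status] # => 304
```

Designs that make most responses validate this way keep origin load nearly flat as traffic grows, which is the low-cost scaling path described above.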
My first truly high-traffic site was in 2000-2002 (ecomm), but I didn’t work on another until 2010. Most have been high-traffic since then, several in publishing, where every 10-20ms significantly adds up. I’ve dealt with large-data-volume businesses since 2000.
📙 Many of these are outlined in my StackOverflow timeline
MyBankTracker and Investopedia are both publishers in the space of consumer financial products. At each, I built and led the creation of a high-performance, high-traffic content assembly system with top SEO and SEM characteristics.
In both cases, as well as at Even, I designed and led the creation of a widget system for financial product display. Display formats included free tables, calculators, questionnaires, recommenders, full-page content highlight cards, and a range of IAB formats. All of these were designed to work with a personalization engine that drew data from our content pages, as well as from metrics pulled from across our partner network.
All of these systems redefined the way we did our business intelligence process, because we could perform more targeted multivariate optimization on both sides of our two-sided marketplace. I was a strong ally to business planning and modeling teams, backed by extensive experience with business ontologies and related industrial data standards and steering committees.
I led several impactful changes to a loan origination system (reverse syndication). These included data warehousing and big-data engineering (cutting reporting times from 16-28 hours to 30-70 minutes); a remarketing and re-engagement messaging framework that worked with application state and regulatory compliance; and back-office operational tooling that doubled our throughput.
I think all of my experience since 2015 has been principally data-driven: MyBankTracker, Investopedia/Dotdash, Even Financial. All of these companies had data-driven business models. Online publishing is principally driven by SEM and SEO measures; the editorial calendar, and thus our engineering cycles, revolved around them, leading to numerous nuanced updates with large business impact. Beyond publishing, data drove decisions about which widget features to add, enabling higher-resolution development of our user persona modeling (for traffic segmentation). While at Bloomberg, bug and feature requests were prioritized by data (frequency + value), with only rare exceptions for requests from high-status clients. At Percipio, we had an even more single-minded focus on using data to drive decisions about how A/B tests affected revenue, to the point that tests were rarely used, because post-deployment revenue changes were the simplest way to test the efficacy of incremental feature changes.
In the cases above where we segmented traffic, these models would form data products that we sold to our partners. At MyBankTracker we also had a sizable web crawling/scraping programme, which was repackaged as a data/API subscription service. For MyBankTracker, Investopedia/Dotdash, and Even Financial, our traffic was organized by the personalization engine into API calls to our demand-side partners (financial institutions); this API data product was the backbone of revenue. At Medidata, one of the 5 product tiers was a data product that collected longitudinal and operational data from across the clinical trial, and this data was the direct product for use in the clients’ analytics and planning programmes. At MyBankTracker we were also developing a data product for revenue estimation of financial media websites.