Episode Transcript
[00:00:00] Speaker A: Welcome to Within WordPress, the podcast about WordPress, and full of WordPress people.
With us today is Alex. Welcome, Alex.
[00:00:15] Speaker B: Thank you for having me. Good to be here.
[00:00:18] Speaker A: Nice to having you here again.
I'm gonna say "again" because something went wrong with our previous recording. I think it's a funny thing, mostly because you were flexible enough to do it again, but it's a good example of recording everywhere and yet, when I went to find the recording, it was no longer there.
But, yeah, happy to have you here again.
For those of you listening in, Alex is quite active in the WordPress community, but you may have never heard of him specifically. So, Alex, for those people who have touched your world in one way or another, please introduce yourself.
[00:01:07] Speaker B: Sure. So I'm Alex. I'm the founder of ScaleMath. We are a services and advisory business that partners with companies to operate as an extension of their ops, growth and product team.
So we've been associated with quite a lot of companies in the WordPress space, which is where I guess a lot of the confusion comes from.
But, yeah, that's a sort of quick introduction into who I am and what I've done, and some of the companies we've been involved in. Funnily, when I was on Matt Medeiros' show, he would every now and then swap out ScaleMath for Rank Math, which was one of our early clients, and of course their founder helped us settle on a name. So there is that very uncanny similarity.
[00:01:52] Speaker A: Are you saying Mr. Medeiros made mistakes?
[00:01:56] Speaker B: Yeah, he's still not AI? Not yet, at least.
[00:02:00] Speaker A: No. He's certainly trying very hard. But are there more companies that you can name drop? Like, are you allowed to talk about the types of companies you work for?
[00:02:13] Speaker B: ScaleMath? We don't name them all openly. I mean, there are some that are listed on our site, so we have the pleasure of having worked with Atarim, which is also quite well known in the space, especially if you attend WordCamps.
It's very difficult to miss Vito's big personality and excitement for his product. And then we also work with other companies in the security space, including Patchstack, which is, again, one of our favorite companies to work with, because we get to essentially focus on the area that really differentiates us from other companies in the space: we have deep subject matter and technical expertise in house. So we focus on the stuff that, at the moment at least, can't be replaced by AI, which is conducting actual deep technical research using the products, as opposed to stuff that is just summaries, or the skyscraper approach of putting together the best article from a combination of 10 or 20 existing articles on a topic.
So long story short, in the SEO space you have this approach. I think it was originally coined by Brian Dean, who sold Backlinko to Semrush. And it was the idea that in order to rank, you just needed to look at the first five to ten ranking posts for a keyword and then put together something that was, in your opinion at least, objectively better, more comprehensive, more complete, which is a very valid approach, in many ways still valid to this day.
And we do still obviously consider, okay, when we're covering a topic, what has been covered before. But I think if you start at that level, you end up with a huge amount of similarity, which is also in part the area that is easier for AI to do, because it lacks originality; it lacks any differentiation from what's already out there.
[00:04:05] Speaker A: Yeah, exactly. There's not a lot of originality, which is the thing AI is sorely missing, and it's questionable whether it will ever get there.
So those are already a few clients that are very high profile, very visible, with very cool products as well.
Is there a preference for the types of companies you work with, or is anybody working in the WordPress space interesting? Or even outside of it. How does that work?
[00:04:41] Speaker B: Yeah, we focused on WordPress because it was what we knew starting out. But we definitely work with companies outside of WordPress too. WordPress is just, I would say, arguably the most approachable of all tech communities, so it's a lot easier to show up at a WordPress event.
And the companies that operate in this space are much more open to that. At one of the first events I was at, we represented Atarim, so we were there running the booth with Vito. The first event was actually me alone, and at the second event it was myself and our team member Justin who attended to run Atarim's booth. So our partnership with companies is a little deeper than a conventional agency's in that sense, because we attend both WordCamps and other events to represent the companies we work with, since we work very closely with them.
But in terms of the specific clients we work with, some characteristics are that they are past product market fit and typically still founder led, so you have a technical or non-technical founder who is heavily involved in the day to day. And usually, if they're planning to raise funding, they're at the seed level or have just raised their seed round. But personally, in many senses, I think bootstrapped and profitable businesses have in the past been the more exciting ones to work with. So yeah, it's kind of a diverse range.
[00:06:03] Speaker A: Yeah, yeah. Could we say that ScaleMath is sort of to be seen as an extension of the marketing department, or perhaps just is the marketing department? Is that a fair TL;DR?
[00:06:16] Speaker B: It is. I've steered clear of that, and I also jump a little bit back and forth in terms of how we talk about what we do. Over the years, very early on, before it was a trend, we veered away from "oh, we are an SEO agency."
So, like four or five years ago, we coined the umbrella term "customer acquisition and experience," and everything that fell under that umbrella was sort of our remit. But quite quickly, when you say that, it's like, okay, but what exactly, and how exactly do you fit into this type of company? And it differs. Some of the companies we've worked with have very small in-house teams focused on growth and marketing. So when we come in, they're essentially hiring me, and by extension the team that I hire to work on that account comes along to serve as their marketing or growth team. That's one approach; I would say that's our startup approach. Then we have other approaches where we work with and join more established companies, which have maybe 5 to 10 in-house people, depending on how they split up the team.
That becomes more complicated. There's a whole story I could go into about the company politics you deal with in those situations.
But yeah, we don't necessarily prioritize one over the other, because as long as I see the ability to come in and have an impact with what we're able to do for the business, I'd say it's fair game. Obviously that varies; that's not always the case. And at our current size, we are well suited to serve a specific type of company that is at a specific stage, facing certain problems. But as we grow and slowly serve bigger and bigger companies, that also changes over time.
[00:07:59] Speaker A: Yep, yeah, I get that. So the answer is basically yes, comma, but it depends.
[00:08:06] Speaker B: Yes, exactly. But you avoid saying it depends because otherwise that already is a telltale sign.
[00:08:12] Speaker A: No, but I can say it. I can say that you probably won't say it in a conversation when asked directly. But that's an interesting position.
How long have you been in the... I'm just going to lump everything together in one bracket. I don't even know what to call the bracket, other than that you've been in the game of WordPress content for quite a while, and by that I mean content created in, about, or related to WordPress. How long have you been doing this?
[00:08:55] Speaker B: So I first became a user of WordPress back in around 2012.
I first started working with, and we've talked about this in the first recording, but I feel like it's relevant to bring up again, one of your...
[00:09:08] Speaker A: Nobody's listening to the first recording.
This is the first recording now.
[00:09:15] Speaker B: ...is Cristian from WPChill. The main product of his that we worked on was Modula. That was around the time that I was finishing off high school.
I connected with him, I believe it was through Indie Hackers, through the forum. At that time it was just me, so ScaleMath didn't exist, and I was essentially under his wing, helping him do what was necessary to grow that product. From there, I enjoyed the work, the day to day of mainly SEO at the time, and I was like, okay, if we can do this, why not for other companies in the space, perhaps ones I'm a little more attached to? Although I was a hobby photographer at the time, and Modula was a photography plugin for photographers, I wanted to focus more on stuff that was within my interests as they developed into marketing and growth. And that is when I connected with Rank Math, which again is an SEO plugin, so there was a focus on SEO from there.
[00:10:15] Speaker A: Yeah, gotcha.
So from 2012 roughly, but let's call it a good fat 10 years of really diving into it seriously. Besides AI, what would you say is the biggest change that you've seen in the last couple of years?
[00:10:38] Speaker B: Very difficult, because it is overshadowed by AI to the point that it's hard to point to anything else, which I know is why you're asking. I'd say the bar for quality is what I would point to, and I think AI in a sense has contributed to that as well.
How it will play out with AI stands to be very different. I think you alluded to the fact that we don't know if originality will ever be fully replaced by AI, because how can it be original, given it is a large language model?
I do think the bar for quality has risen, from what I've seen both in and outside of the WordPress space. When I started in WordPress, the bar for quality for plugins, for themes, for builders was very low compared to what it is now.
And credit also to Kevin, Kevin Geary, who we did talk about right before we started recording. I think the bar for quality is constantly going up, which I love to see. Of course, I think that makes WordPress a more viable choice.
And that is also in part outside of WordPress: the bars that you've seen, so Linear as a project management tool, Webflow and other products that compete with WordPress, have moved at a very, very rapid pace in the last... I mean, let's focus on the last six years at least.
[00:11:49] Speaker A: Yeah, five, six years for sure. Yeah, yeah.
In terms of content, would you say the same thing?
[00:11:57] Speaker B: I'd say the same thing, yes.
My argument, and my joke, when I talk about AI and its impact on content, at least so far, is that the bar I've had for content has always been slightly higher. So I'm comfortable in the fact that we're positioning ourselves to be in a winning position regardless of how much of the process AI replaces. Because if it does, that means we can just pass on the benefits, whether it is cost, in terms of not needing to hire as many people, or just time efficiency, in terms of being able to do more. We can pass that down into the work that we do for the companies we work with, which means we can be more cost efficient and have a higher cost-to-value ratio, which is ultimately what we want: a higher value...
[00:12:41] Speaker A: That makes sense.
[00:12:42] Speaker B: Ratio rather than.
But yes, the bar for content was pushed forward by AI, because it's commoditized what was sort of a baseline, even for people who aren't native English speakers, which is a big step forward and, at the moment at least, has been positive. So I'm excited to see what we can continue building, both as internal tooling to do better work, as well as what other people build with AI.
[00:13:11] Speaker A: Yeah. Well, the reason I'm asking, obviously, is that you and I both understand that AI started out quite impactful, but we're certainly reaching the point where we're hitting the next level.
And if you're writing content, that is probably the first thing people look at in terms of this is what AI can help me with.
I mean, if I look at myself, I think the creativity is what I enter into my prompt, and then I ask it to give me three examples. For instance, a company I love working with, whose product I've been using for probably 15 years, asked me to write a review, and I'm like, I don't know where to start, man. I'm not good at that.
And it needed to be short; I had X amount of characters and all that. So with just a few words I gave the general vibe of what I thought of that company, of that particular product.
I typed it in, gave it a few more parameters, hit enter and said, give me three versions of how you would write this review.
And the second one was perfect, in my voice. Obviously I did it in the right project, so it's learning from how I say things. I copied and pasted it, sent it off, and they were like, oh, thank you so much for a wonderful review. Now, it's not literally my own words, but you get it, right? So that's getting comfortable enough with AI...
For it to understand you and to help you in the sort of, yeah, you know, "help me think of ideas" way is huge, and it's only getting bigger. And for you as a company, writing content as well as conversion tracking and everything connected to it, it's for sure raising the bar. But there's no escaping it: you would have to implement AI like that. You could not not do it. In my mind, it's close to impossible not to.
[00:15:30] Speaker B: So in my view, so far it hasn't had a massive effect, for a couple of reasons. But yeah, I think there's no question that it's pushed the presence of AI, both for us as a company and for in-house employees and teams. Either you get left behind, you sort of fight AI and you're one of the people who thinks "we can't adopt it because we're against using it," in which case I think you're putting yourself in a losing position. Or the other option is that you need to become better, in the sense that you use AI to speed up your time. That differs per person, but for you that might mean: I don't want to write that review, because that's kind of low impact, and as long as it sounds like a review that came from me, it's pretty good.
And it doesn't really matter, unless you're talking about specifics that it won't be able to pull out of your brain.
So that would require you to actually write it yourself. And then the second thing is becoming faster, which is both doing more and doing more in less time. Doing more in less time also means less money spent on in-house teams. So if one person can produce the output of five people, brilliant.
That's a net positive for the company. What the outcome is big picture, for employees and the whole service economy, that's a completely different discussion, which I think is probably beyond the scope of a one-hour podcast conversation.
[00:17:00] Speaker A: Yeah, I was gonna say, if we open that can of worms... it's not even a can of worms, it's like a bucket. There are so many things you could say and just go really deep on. But no, it's more like...
So first of all, I acknowledge that those of us who work on the web primarily, certainly in web development, are very prone to thinking that the things around us to play with, AI in this particular case, are what everybody is already using, and that everyone is quite versed in it and understands the possibilities, the limits and all that.
So first off, I understand that that's the position, the privileged position that we have.
But given that you particularly work in a field where AI has the potential to be so incredibly enriching, in the process, the speed, the quality, the quantity, the double checks, all that sort of thing, it's almost mind-bendingly large and exponentially growing, I feel.
And as such, I'm quite curious how you process AI. What do you think is the biggest pitfall of using AI for content, for instance?
[00:18:36] Speaker B: The biggest pitfalls, I would point to two. For written work, I'd say it's always been held to a slightly higher standard than video, because with written work you already don't know who wrote it, even if there's a name attached to it. So you're inherently inclined to distrust what you read, wherever you read it. If you read something on our site, do you believe it's actually written by me if it's bylined to me? I won't comment on whether I've written absolutely everything that is bylined to me on our own site, but it's very common for that not to be the case. For video content, that bar of "can you trust what you see"
has always been a little bit lower, because I see Remkus on the screen, so my barriers are down. I know I can trust what Remkus is saying and the product he's recommending. But now we've seen, and this is the crazy pace at which this is moving, just in the last 30 days alone, there are now models so accurate and so effective that they can generate AI influencers where, when you're scrolling past right now, you wouldn't be able to tell the difference. And there's no telling that within the year you wouldn't even be able to tell the difference if you looked very, very closely at a video. So in a world where this is the case, our angle and our approach and our whole...
Yeah, our intrinsic value is that as long as we're doing work that others can't, which is largely the decision making of what work we actually pursue and what we do, then we're still irreplaceable as people. So in your case, it's like, okay, well, one day am I not going to have to show up to do every podcast interview, and can I just have sort of a bot show up and have my conversations? For me, I think there's an element of: you want to show up to have the conversations and you want to take them in a certain direction. So as long as we're doing work that we have an opinion on, work that is inherently opinionated and that we want to take in a certain direction, and we're also deciding which work to do, i.e. in your case which people to invite to the podcast, in our case which topics to cover, and when we cover a specific topic, how it fits into the product and into the overall strategy...
And also obviously dumbing it down a little bit.
Then I think the first two, i.e. becoming better and becoming faster, become valuable to us. Whereas I think it's easy to get caught up and think, well, if we're just going to have to keep becoming better because AI keeps becoming faster and better on its own, then we're going to be expected to do so much that, where is the value in us actually aiding the AI tools?
Yeah, I think that's a reality where right now you see people who think, and to some credit it is possible, to vibe code solutions and software products. A lot of people, particularly YC, Y Combinator, have this opinion that code is now going to be written entirely by AI, and before we know it people are going to build their own products. So why would you use Etch, the page builder designed by Kevin Geary and his team? Why would you not just build your own page builder from the ground up, or just prompt AI to do the work for you? I'm not of that opinion; I think that's overstating the end outcome, because people want control, and with the AI tooling, at the moment at least, you are giving up control. And the second thing is people don't actually know what they want, which is why they need certain people to build software that gives them constraints, so they can work within those constraints and be creative within them. So yeah.
[00:22:03] Speaker A: So it's funny that you bring up Vibe coding.
It's something I asked this week on X, formerly known as Twitter: so where do you land? Is the vibe coding trend hurtful or harmful, or are you just super happy with what we have and want to do more?
I think the most important thing there is that it's a wonderful way to use a different version of learning, and in this particular case the AI will generate code. So if you're not aware, vibe coding is essentially using AI to code your project.
A popular tool to do this is Cursor. Inside Cursor you can have a whole bunch of rules determining what your project looks like, and then you tell the AI, obviously as verbosely as you possibly can, what it needs to build, and it does, to various degrees of success.
But the problem with that, and for me the parallel is to AI creating content in the same way, is that you've not learned why it builds it the way it's building it. You've not learned to understand the logic beyond what is shown. In other words, if you're not letting yourself be taught after you've done the project vibe coding, then you're missing out on the largest benefit I think there is in vibe coding.
Because yes, it can do a whole bunch of code, really complex projects for you, certainly if you mix tools. This is something I really enjoy, this particular element of vibe coding: you have, for instance, the integration of Cursor and Claude to have the application created, and you then use ChatGPT to review and double check it. That's a wonderful mix, right? The more eyes the merrier. But the end result is you have something where you potentially don't really understand what it does, certainly if you're new to the concept of coding.
You know, dangerous enough to understand an FTP client and uploading it to a website, but not really understanding the implications of what you just did. That is the risk. Plus, you know, potential maintenance hell, potential technical debt, which I think is a huge one, and of course security.
Oh, security. There you go. Yeah, it's just a bunch of stuff that I think is getting too easy. And the same goes for me, although the impact is much, much less severe, for creating content, because it's so easy to write about something.
Yeah, I think it enriches us, but you should still consider yourself a noob. And I think we, as more experienced people on the web as a whole, certainly on the developer side, we need developers talking about it. We need skilled developers talking about the thing they're vibe coding. I would suggest, if anything, if you are a somewhat experienced coder, a light coder or a very experienced coder, the first thing you do is take an existing project and ask it to look at all the things that could make it smarter, more secure, more scalable, faster, all of those things. Just have it look and see what it finds, have it produce its findings, and then learn from that, question it, and go back and forth before you have it implement something.
Things like that will make a better product and keep you in learning mode. And I think that particular part is the thing we're missing the most: people stepping into learning mode, rather than, you know, "I'm just going to build my own app here, watch me do it."
[00:26:35] Speaker B: So I 100% agree with what you've said. It's proliferated the number of people who don't know what they're doing but think they know what they're doing, which is fun in many ways. We saw just last week that there was somebody who posted about the fact that they built a product, launched it, and it was entirely built by AI, by them prompting and somehow getting this product to work. Again, you don't know how half baked it was, but it worked. And then they got users, they had people paying them for it, and then they started getting hacked. And they had obviously absolutely no clue what to do.
[00:27:12] Speaker A: I saw that post on X. That...
[00:27:14] Speaker B: Was not pretty, exactly. But it was just a matter of time. And one other thing I'll say with regards to the whole debate, which I also need to remind myself of, is that there are a lot of people trying to push narratives that suit them. YC specifically has invested in a lot of AI code editors and companion or copilot tools that fit that mold. So it is in their interest to push the narrative to enthusiast developers, newbies who don't know how to build something from scratch on their own, to use those tools to do so, because they want to make their $20, $30 a month off of each person, plus of course the usage billing they have on top of that. So yeah, I'm sure it will continue to get better, but as you said, right now with development it's easier to get it to improve something, then review it yourself and implement it, than it is to get it to build something completely from scratch.
Which is why things like TypeScript and strongly typed code bases make it easier, because they leave the AI less room for interpretation. Yeah, exactly. So I think with content it's obviously different. With content, coming at it purely from an improvement angle is less valuable, I would argue. I think it's more valuable from a point of: where do I start, help me restructure this.
Or if you're reviewing a lot of old, long pieces of content, it is great at suggesting a structure, and then, if you have the right tooling, you can very easily say, okay, go ahead, apply that structure.
But I think equally it comes down to: are you, as a publisher, as a person publishing content or information online, going to take the time to actually proofread what you've written? The best example I can think of, and I don't want to fault the Next.js team for this, is that their documentation was called out last week or the week before for citing the use of middleware for authentication as a best practice, or as a good default that people could use. And of course, when a vulnerability in middleware came out at some point, they pointed out, well, actually, that's never been what we've recommended. And people pointed to the documentation and said, look, actually, you have recommended this. And this is in a world where we have AI that in theory could fact check, and they have enough people on staff who in theory could have caught this, but they haven't. So my mentality will always be that there will be enough work to go around, and hopefully AI will eventually have the tooling to spot scenarios like that and point out: well, actually, let me counter that point. Why are you mentioning this? Because in our system we determined that that's not the case, and we shouldn't be using middleware in this way. So yeah, I'm still excited that there are going to be a bunch of applications we haven't considered yet, because of what the experience of using it could be. And I'm somewhat biased, because we are building a tool on this front, for internal use initially, and then we're going to launch it to the public, which is about how we can use AI to our best advantage. So let's say in that case of Next.js's documentation citing middleware for authentication, which is just the best example I can think of right now, that could have been caught by a tool that scans the documentation and surfaces a recommendation that a human then reviews and considers. And I think that element, that strategic and decision-making element of a human making the decisions, will remain there at least for the foreseeable future, in my opinion.
Which is kind of where we sit: even if AI gets so good that it's literally just a bunch of people accepting suggestions, proofreading and double checking, then great, we can do far more work and hopefully be far more efficient with it. At the moment it's not really quite there yet, and the tooling still needs to advance further for that to be possible.
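To make the middleware example above concrete, here is a minimal TypeScript sketch of the pattern being discussed: a Next.js middleware that gates a route on a session cookie. The cookie name, the protected path and the redirect target are hypothetical placeholders, not taken from the episode or from Next.js's documentation; the point made above is that a check like this should be treated as a convenience layer, with the real session verification repeated in the route or data layer rather than trusted on its own.

```typescript
// middleware.ts — minimal sketch, assuming a cookie-based session (names are placeholders).
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

export function middleware(request: NextRequest) {
  // Convenience check only: redirect to /login when no session cookie is present.
  const session = request.cookies.get('session')?.value;
  if (!session) {
    return NextResponse.redirect(new URL('/login', request.url));
  }
  return NextResponse.next();
}

// Limit the middleware to a (hypothetical) protected area of the site.
export const config = { matcher: ['/dashboard/:path*'] };
```

The route handlers behind /dashboard would still verify the session themselves, so that bypassing the middleware alone does not expose protected data.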
[00:31:17] Speaker A: Yeah, I'd love to know more about the thing that you're building, but I have one follow-up question in terms of AI, and particularly the research side of it.
I'm assuming you've played with ChatGPT's deep research mode.
[00:31:36] Speaker B: I have, yes. And also Grok's, which I think is also steadily improving.
[00:31:39] Speaker A: Yeah, Grok is moving fast, I would say.
But yeah, the reason I'm asking is that I hadn't really used it yet, just once or twice, and only for something insignificant, because I wanted to see what the output was like.
I jumped into it over the weekend. So let me preface the question that I had correctly. I have an old car, a Mercedes 190D from 1985, with a seat that has the slide thing to the left of it with which I can, you know, raise it or lower it. But even in its lowest position, I'm 193 centimeters, or six foot four.
That means I'm very close to touching my head to the ceiling, right? So I know there are brackets out there somewhere that lower the seat by default, just a different type of bracket in which the seat is installed. And I was like, hold on, what would I normally do? I would normally scour through dozens of forum posts, because, mind you, this particular corner of the Internet is seriously unoptimized for search engines. So you will typically have a lot of problems: just a whole bunch of forum posts indicating that something might be here, but who knows what. And I was like, look, I need this bracket.
Let me just do the deep research mode on ChatGPT and see what it says because it's supposed to do the deep research.
And it was a fairly simple prompt: I know there is a bracket, and I know there are options for me to lower the seat. This is the car that I have, this particular model version, all that. I gave it all of that, and then I said, please let me know what the best solution is.
And it did its thing for 10, 15 minutes; it took a really long time, and there was a lot of information it kept giving back to me. But it boiled it down to three options. I looked at them, and all three were absolutely spot on; there was nothing for me to add. I went ahead, purchased the one that I needed, and I'm done. This, to me, is next level. This is where it becomes interesting, because this is just an innocent topic, right? There's a bracket, I need it, I couldn't find it, let me see what the AI says, basically.
What do you think of that sort of level of research capabilities entering the public domain?
[00:34:28] Speaker B: I think for the use case you've described it's really valuable, and also for tangential use cases like yours that I have as a person. As a company, the element of it that excites me, my first reaction, is that because of this we're able as a team to research topics we know very little about, become experts on them, and cover them in far less time, without necessarily needing to hire experts on given topics.
I think right now the argument I have against that, and the reason we can't do that yet and don't really reap a benefit from it, is that there are still checks and balances in place. If we were to work with a fintech startup, we can't exactly say that we're going to... you know, we're very transparent in how we work with companies. So if we do use a tool like this, we would pass on the benefit. We would never try to pass it off as though we actually have an expert in house.
And the expectation from companies, right now at least, is still that there is an expert. Let's say you're writing and it needs to be a CFA; that's just one example, which in my opinion, and I'm not trying to talk down on people who have done the certification to become a CFA, doesn't necessarily mean you're very different from somebody who is largely very intelligent, studied, took a STEM degree at university and now uses deep research. But to the company there are still checks and balances in place, and they want somebody involved at some stage who has that certification or has some experience in the industry, which I fully understand and fully respect. And I think that's also why, at least not very quickly, that human element will go away. In your innocent use case, you could argue that maybe in a year's time you'd even have given OpenAI your credit card information to just basically order the two for you, so that you could then try to install them. You wouldn't have been against that, because whatever it cost would probably have been relatively insignificant, and you could have refunded it later. So I think that element of it is there, but when it comes to a company taking a stance on something, or a company like, let's say, Mercedes themselves writing about the official approach, they wouldn't necessarily do that without having a human somewhere in the process. So yeah, I think in terms of production and publishing, the bar should be higher than what most people are trying to dumb it down to, for the reasons I just described. In terms of consumption, and not wanting to consume everything when I literally just want a link to the thing to order...
Yeah, it is a quicker way of getting to it, especially for the longer tail stuff.
[00:37:20] Speaker A: Yeah, interesting.
I can see that version and I think you're right. I think that is the difference.
I mentioned this particular example for deep research because I have done these types of research for as long as I've been on the Internet, because I'm a car nut. I look up car stuff, I modify cars, I do things with them. So I need parts, I need solutions, I need this and that, so I know the time consumption that requires. I know what that's like. And this, for sure, was done in 10, 15 minutes.
And that, to me, was one of the first times where I thought, oh, okay, not only are you listing the sources, it basically reasons me through its thought process: it comes up with this, and then comes up with that, you might have this. Then it goes on, let me double check that. And then it says, oh, I found another link, let me double check that as well.
And the end result was: look, here are three options. It lays them out and at the very bottom lists all the sources. And I go, okay, that means I can check every single thing. So I did, and they all checked out.
But the actual research part was, you know, 10, 15 minutes, whatever it was.
And that to me was the first time I went, oh, okay, that's a change, that's something new.
I can see myself using this more often now.
That said, I will never be an expert on the level that a real expert would be. I've mentioned this in previous podcasts as well, but the assumption that AI is a form of intelligence is something we need to drop real fast, because it's got nothing to do with intelligence on that level. What it is, is really good at predicting language.
Just take it for exactly what that means and nothing more.
Because it can't think of something new. It can only use whatever it has as a reference for something that looks new but, you know, in reality isn't.
[00:39:41] Speaker B: Like somebody who tried a lot?
I think yes, initially, and more so, let's say, around a year ago, as the discussion around AI became hotter and hotter. Now less so, because I think more people, at least more people on the informed side of the spectrum, like you and I, who know how the technology works under the hood, or at least have an understanding of it, realize that it can't fully replace the knowledge and experience, the firsthand-experience aspect. So in your example, somebody will likely, hopefully, have tried to install the seat using one of those two or three methods you mentioned, and there are advantages and disadvantages to each of them.
[00:40:25] Speaker A: There are.
[00:40:26] Speaker B: Now, the AI language model has not actually gone and installed it, and can't. At least not yet; it isn't connected to a robot that can say, oh, when installing the first one I realized that you actually have to damage this part of the floorboard, and therefore I wouldn't recommend this one over that one, for example.
And I think that's what you benefit from: still having a human involved. Eventually, one could argue, in a more dystopian world, it will just be like, yes, I'm actually going to go and try it to see if it works, in certain areas of work at least, which I can see happening in our lifetime. Not in the next two, three years, though.
[00:41:06] Speaker A: I think at one point it will go to YouTube and look there as well: process the video that describes this particular problem, analyze the imagery, all of that. It's just going to become more and more encompassing, but it's still not going to be producing something original. It's an ever interesting topic; I think the vast majority of podcasts I've done have in some way connected to AI, because it's becoming such an integral part of building, writing, and creating stuff on the Internet.
To go back to something you said offhandedly: you're building a tool.
Do tell.
[00:42:04] Speaker B: Absolutely. So we've started building it. You've heard about it from me firsthand.
I think it must be over a year ago now. So Workover is the name of the product. It is the first product we own fully, and it is one we started building for internal use at ScaleMath to improve our team's workflow; by extension, some of the companies we work with also got access to it.
The initial focus was super basic and simple, in the sense that we just wanted an easy way to connect where we did our work to WordPress, the content management system where we primarily publish, with plans to support other content management systems as we built out that original tool. I would say that wasn't really a fully fledged product; it solved our own seemingly small, yet very time consuming annoyance when you work at a relatively large scale.
We started having conversations with customers on our advisory and services side, but also with people who fit the ICP, about both the simple version of Workover, in the sense of syncing from Google Docs to WordPress, and what it could become if we were to keep shipping. And this was also around the time that AI was improving at a rate where I was like, okay, this is going to be something we can build something truly valuable with, at least at some point. That, matched with our own internal needs, over time led to the vision we had for Workover growing to a point where it went beyond that of an internal tool, which, for starters, we were never really planning to offer to anybody else, and also beyond just syncing from Google Docs to WordPress. So the things we talked about: why is it that there was a lingering mention in the docs of a method of using middleware for auth that Vercel and the Next.js team don't recommend, for however long, and how many people have read that and actually implemented authentication in a way that is not recommended and not secure?
That, in my view, shouldn't be possible. And that is one of the use cases and things that I think we will be able to solve with Workover and with AI. So: combining a connection to the CMS layer, WordPress, Webflow, HubSpot CMS, all of them, with a core writing experience that is where people work, connecting that to AI, essentially developing an understanding of a site, and also a bunch of other stuff that remains to be worked out. There are obviously details I can't go into now, in terms of figuring out exactly how we will determine the knowledge set that we're going to operate on and base recommendations on, and how we're going to make those recommendations. But yeah, we're building a product that aims to compete in this space, which is a relatively non-existent space, which is why I feel relatively confident in saying that, regardless of how useful AI becomes in the content production space or in growth as a whole, we are building something that allows us to be at the forefront of that. So whether it's not having to put together manual reports, where, you know, you want to be able to answer questions like: where are the majority of signups coming from for a software company? Those are questions you should be able to answer without needing a data person in house. But the data people in house right now don't even have a way to do that themselves without spending an hour or so going through dozens of different data sources to figure it out. We're sitting at the intersection of where and what is the most valuable problem we can solve first, and then we just build from there.
And that's kind of our core vision for Workover at the moment. It's a bit more open ended as a product, but I think that's how we have to approach building a product in this age where AI is moving so quickly. So yeah, what becomes valuable changes.
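For readers who want a picture of the "sync to WordPress" piece described above, the sketch below shows what publishing from an external editor into WordPress can look like via the core REST API. The endpoint and application-password authentication are standard WordPress; the document shape and the function itself are illustrative assumptions, not Workover's actual implementation.

```typescript
// Minimal sketch: push a finished draft into WordPress via the core REST API.
// The Doc shape and publishToWordPress function are illustrative, not Workover's code.
interface Doc {
  title: string;
  html: string; // body already rendered to HTML by the external editor
}

async function publishToWordPress(doc: Doc, siteUrl: string, user: string, appPassword: string) {
  const response = await fetch(`${siteUrl}/wp-json/wp/v2/posts`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // WordPress application passwords use HTTP Basic authentication.
      Authorization: 'Basic ' + Buffer.from(`${user}:${appPassword}`).toString('base64'),
    },
    body: JSON.stringify({
      title: doc.title,
      content: doc.html,
      status: 'draft', // leave the final publish decision to a human reviewer
    }),
  });
  if (!response.ok) {
    throw new Error(`WordPress API error: ${response.status}`);
  }
  return response.json(); // includes the new post's ID and link
}
```

A two-way integration of the kind described in the conversation would also pull changes back out of the CMS, but the one-way push above is the simplest version of the original Google Docs-to-WordPress workflow.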
[00:45:51] Speaker A: I agree. There's a bunch of stuff we've decided should be part of the feature set for Scanfully, and between the moment we thought "this is what we need to have in here" and the time coming for us to start implementing it, sometimes even just a couple of weeks, it becomes so much more enriched, and you go, hold on, hold on, hold on, I need to think about this a little bit longer, because now I can do this and that. Especially on the "I have a large bucket of data" side, there are so many interesting things you can do in terms of analysis, where analysis of previous instances becomes a prediction for what comes next. So yeah, for us, AI is also going to be...
I don't know if we're going to hit that this year already, but at some point AI will become a very integral part of what Scanfully does, which I think is inevitable. You mentioned that Workover does, or allows for, the flow to be monitored in a way. Are we talking about the creation of content and how it gets edited and then published, all of that, sort of like git versioning of content, combined with the impact of the content? I'm just trying to wrap my head around it fully.
[00:47:24] Speaker B: Absolutely. So yeah, I understand the confusion, because we've also had private conversations where things have shifted quite a bit in terms of direction, for the same reasons you've described: things just move quickly. But yeah, there are a couple of core product areas for Workover. The main one we're focusing on this year is Workover Docs, and offering deep two-way integrations with content management systems so as not to compete on the CMS layer. We don't want to build a CMS; we don't think that's a problem space we want to compete in.
But we do believe there is an element of document management and editor experience where we can move faster than if we were to build an integration with just WordPress, just one CMS, and try to build within the constraints of an existing CMS. So the deep integration of, like, can we create stuff in Workover directly, is what has shifted from the original vision, which was basically just importing a Google Doc and pushing it to the CMS.
So that's why our direction has changed to building our own document editor and management experience in the product: there are certain things we weren't able to do with Google Docs, and it became difficult enough that we decided to pull out and build our own editor. Things like a fundamentally exceptional writing experience will be at the forefront of any product like this, because we are competing in an interesting layer on top of the CMS. If we expect people to use it to produce content, as opposed to just copying content in, which I think is fundamental to getting people to enjoy using the product as a workflow tool, then we have to offer a really, really good experience directly within the product that actually exceeds what content management systems can deliver, because they are largely focused on the data layer. And obviously a lot of WordPress aficionados are going to disagree, you might also disagree, but at least for us, the WordPress block editor has never even been a comparable experience to Google Docs in terms of production. We've never thought to turn to Gutenberg blocks for production; we just never thought it was an efficient and enjoyable experience to work in, and I also don't think that will change. So our element is: we're going to build on top of that data layer, and from there be able to introduce a bunch of interesting things, like predefined AI checks based on prompts that you provide to the system. Let's say you want to capitalize Scanfully in a certain way. The most basic example is that you can make sure that when somebody is writing content in Workover, it encourages them not to write, you know, ScanFully with a capital S and a capital F, for example.
And that's obviously the most basic example, because as AI continues to improve, what we can do there...
Yeah, it goes far beyond that, obviously.
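The brand-capitalization check described as the most basic example can be pictured roughly like the sketch below: a rule that flags mentions of a product name that don't match the preferred spelling before the content ships. This is purely illustrative; the checks described in the conversation are prompt-driven and AI-assisted rather than a hard-coded rule like this one.

```typescript
// Illustrative sketch of a predefined editorial check: flag "ScanFully", "scanfully", etc.
// A system like the one described would drive such checks from prompts, not a fixed regex.
interface CheckResult {
  index: number;    // character offset of the flagged mention
  found: string;    // how the brand was actually written
  expected: string; // the preferred capitalization
}

function checkBrandCapitalization(text: string, brand = 'Scanfully'): CheckResult[] {
  const results: CheckResult[] = [];
  const pattern = new RegExp(brand, 'gi'); // case-insensitive search for the brand name
  for (const match of text.matchAll(pattern)) {
    if (match[0] !== brand) {
      results.push({ index: match.index ?? 0, found: match[0], expected: brand });
    }
  }
  return results;
}

// Example: flags the lowercase mention and reports where it sits in the text.
console.log(checkBrandCapitalization('We monitor our sites with scanfully every day.'));
```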
[00:50:06] Speaker A: Yeah, that's a very interesting project.
Product. Project.
I think I'd love to see a demo when you're willing to give one, at a certain point in the future.
But for now, I want to thank you for being on the podcast and sharing your, in my opinion, expert opinion on all things AI and content.
It's been a pleasure, Alex.
[00:50:36] Speaker B: My pleasure. Thank you for having me.