Archive for the ‘SEO Training’ Category

50 Remote Jobs That Pay Over $50,000 a Year: Part Two Jobs … – Medium

Photo by Domenico Loia on Unsplash

Last week, I started a new series of articles focusing on 50 work-from-home jobs that can pay over $50,000 a year. (Actually, many of these virtual roles can pay upwards of $100,000 a year.)

Anyway, here is part two of the series: Remote jobs #11 through 20.

11. Financial Analyst

Financial analysts can be employed in banks, insurance companies, and other types of businesses. Their job is to guide managers in decisions related to investing money to generate profits.

Typically, you need at least a bachelor's degree in finance or accounting, as well as experience using financial modeling tools like Excel.

Leveling up your skills: WGU Online Finance Degree.

12. UX/UI Designer

User experience (UX) designers ensure that products make sense to users by creating a logical path that flows from one step to the next. User interface (UI) designers ensure that each page effectively communicates that path with the right visual images.

This is another field where having significant experience seems to be preferred by employers over a college degree.

Leveling up your skills: Google UX Design Certificate.

13. SEO Specialist

SEO (search engine optimization) specialists help organizations optimize their websites and digital content for higher search engine rankings by employing a wide range of tactics.

While SEO specialists can work for companies in full-time roles, many professionals opt for freelancing or consultant gigs. The most common job requirements are previous experience and positive feedback from current and past clients.

Leveling up your skills: Udemy SEO Training.

14. Copywriter

Copywriters might use their writing chops to create sales and product copy, email marketing messages, and other types of promotional messages. In addition to good writing skills, they often need to have solid SEO and keyword research experience.

Just like with other marketing roles, some companies will require (or maybe just prefer) a bachelor's degree in a field

Originally posted here:
50 Remote Jobs That Pay Over $50000 a Year: Part Two Jobs ... - Medium

How Search Generative Experience works and why retrieval … – Search Engine Land

Search, as we know it, has been irrevocably changed by generative AI.

The rapid improvements in Google's Search Generative Experience (SGE) and Sundar Pichai's recent proclamations about its future suggest it's here to stay.

The dramatic change in how information is considered and surfaced threatens how the search channel (both paid and organic) performs and all businesses that monetize their content. This is a discussion of the nature of that threat.

While writing The Science of SEO, I've continued to dig deep into the technology behind search. The overlap between generative AI and modern information retrieval is a circle, not a Venn diagram.

The advancements in natural language processing (NLP) that started with improving search have given us Transformer-based large language models (LLMs). LLMs have allowed us to extrapolate content in response to queries based on data from search results.

Let's talk about how it all works and where the SEO skill set evolves to account for it.

Retrieval-augmented generation (RAG) is a paradigm wherein relevant documents or data points are collected based on a query or prompt and appended to that prompt as few-shot context to steer the response from the language model.

It's a mechanism by which a language model can be grounded in facts or learn from existing content to produce a more relevant output with a lower likelihood of hallucination.

While the market thinks Microsoft introduced this innovation with the new Bing, the Facebook AI Research team first published the concept in May 2020 in the paper "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks," presented at the NeurIPS conference. However, Neeva was the first to implement this in a major public search engine, using it to power its impressive and highly specific featured snippets.

This paradigm is game-changing because, although LLMs can memorize facts, they are information-locked based on their training data. For example, ChatGPT's information has historically been limited to a September 2021 cutoff.

The RAG model allows new information to be considered to improve the output. This is what you're doing when using the Bing Search functionality or live crawling in a ChatGPT plugin like AIPRM.

This paradigm is also the best approach to using LLMs to generate stronger content output. I expect more will follow what we're doing at my agency when they generate content for their clients as knowledge of the approach becomes more commonplace.

Imagine that you are a student who is writing a research paper. You have already read many books and articles on your topic, so you have the context to broadly discuss the subject matter, but you still need to look up some specific information to support your arguments.

You can use RAG like a research assistant: you can give it a prompt, and it will retrieve the most relevant information from its knowledge base. You can then use this information to create more specific, stylistically accurate, and less bland output. LLMs allow computers to return broad responses based on probabilities. RAG allows that response to be more precise and cite its sources.

A RAG implementation consists of three components: an Input Encoder, a Neural Retriever, and an Output Generator.

To make this less abstract, think about ChatGPT's Bing implementation. When you interact with that tool, it takes your prompt, performs searches to collect documents, appends the most relevant chunks to the prompt, and executes it.

All three components are typically implemented using pre-trained Transformers, a type of neural network that has been shown to be very effective for natural language processing tasks. Again, Google's Transformer innovation powers the whole new world of NLP/U/G these days. It's difficult to think of anything in the space that doesn't have the Google Brain and Research teams' fingerprints on it.

The Input Encoder and Output Generator are fine-tuned on a specific task, such as question answering or summarization. The Neural Retriever is typically not fine-tuned, but it can be pre-trained on a large corpus of text and code to improve its ability to retrieve relevant documents.
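To make the three-component flow concrete, here is a minimal sketch in Python. The bag-of-words encoder, overlap-based retriever, and prompt-building generator are toy stand-ins (real systems use pre-trained Transformers for each role); the corpus sentences are illustrative only.

```python
def encode(text: str) -> set[str]:
    """Toy Input Encoder: represent text as a set of lowercase tokens."""
    return set(text.lower().split())

def retrieve(query_vec: set[str], corpus: list[str], k: int = 2) -> list[str]:
    """Toy Neural Retriever: rank documents by token overlap with the query."""
    scored = sorted(corpus, key=lambda d: len(query_vec & encode(d)), reverse=True)
    return scored[:k]

def generate(query: str, context: list[str]) -> str:
    """Toy Output Generator: append the retrieved chunks to the prompt.
    A real system would send this augmented prompt to an LLM."""
    context_block = "\n".join(f"- {c}" for c in context)
    return f"Context:\n{context_block}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "SGE uses PaLM 2 with Google Search as its retriever.",
    "Featured snippets capture 35.1% of clicks when present.",
    "Rank tracking tools must render the SERP per query.",
]
query = "What model does SGE use?"
prompt = generate(query, retrieve(encode(query), corpus))
print(prompt)
```

The point of the sketch is the data flow: the retriever narrows the corpus before the generator ever sees it, which is exactly where RAG quality is won or lost.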

RAG is typically done using documents in a vector index or knowledge graphs. In many cases, knowledge graphs (KGs) are the more effective and efficient implementation because they limit the appended data to just the facts.

The overlap between KGs and LLMs shows a symbiotic relationship that unlocks the potential of both. With many of these tools using KGs, now is a good time to start thinking about leveraging knowledge graphs as more than a novelty or something that we just provide data to Google to build.
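A hedged sketch of why a knowledge graph can be the leaner RAG backend: instead of appending whole document chunks, retrieval returns only the facts (triples) whose entities appear in the query. The triples and the substring-based entity matching below are illustrative assumptions, not drawn from any real KG.

```python
# Facts stored as (subject, predicate, object) triples.
TRIPLES = [
    ("SGE", "uses", "PaLM 2"),
    ("SGE", "retriever", "Google Search"),
    ("Bard", "made_by", "Google"),
]

def kg_retrieve(query: str) -> list[str]:
    """Return triples whose subject is mentioned in the query, as short facts."""
    q = query.lower()
    return [f"{s} {p} {o}" for s, p, o in TRIPLES if s.lower() in q]

facts = kg_retrieve("Which language model does SGE use?")
print(facts)  # only SGE facts are appended, keeping the prompt small
```

Because only the matching facts are appended, the prompt stays short and on-topic, which is the efficiency argument the paragraph above makes.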

The benefits of RAG are pretty obvious; you get better output in an automated way by extending the knowledge available to the language model. What is perhaps less obvious is what can still go wrong and why. Let's dig in:

Retrieval is the make or break moment

Look, if the retrieval part of RAG isn't on point, we're in trouble. It's like sending someone out to pick up a gourmet cheesesteak from Barclay Prime, and they come back with a veggie sandwich from Subway: not what you asked for.

If it's bringing back the wrong documents or skipping the gold, your output's going to be, well, lackluster. It's still garbage in, garbage out.

Its all about that data

This paradigm's got a bit of a dependency issue, and it's all about the data. If you're working with a dataset that's as outdated as MySpace or just not hitting the mark, you're capping the brilliance of what this system can do.

Echo chamber alert

Dive into those retrieved documents, and you might see some déjà vu. If there's overlap, the model's going to sound like that one friend who tells the same story at every party.

You'll get some redundancy in your results, and since SEO is driven by copycat content, you may get poorly researched content informing your results.

Prompt length limits

A prompt can only be so long, and while you can limit the size of the chunks, it may still be like trying to fit the stage for Beyoncé's latest world tour into a Mini Cooper. To date, only Anthropic's Claude supports a 100,000-token context window; GPT-3.5 Turbo tops out at 16,000 tokens.
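The standard workaround is to greedily pack the highest-ranked chunks until the token budget runs out. A minimal sketch, assuming whitespace splitting as a crude stand-in for a real tokenizer and made-up chunk text:

```python
def pack_chunks(ranked_chunks: list[str], budget: int) -> list[str]:
    """Keep the top-ranked chunks whose combined token count fits the budget."""
    packed, used = [], 0
    for chunk in ranked_chunks:
        cost = len(chunk.split())  # crude token estimate; real systems use a tokenizer
        if used + cost > budget:
            break  # a 100k-token window hits this far later than a 16k one
        packed.append(chunk)
        used += cost
    return packed

chunks = ["alpha beta gamma", "delta epsilon", "zeta eta theta iota"]
print(pack_chunks(chunks, budget=5))  # -> ['alpha beta gamma', 'delta epsilon']
```

Anything that doesn't fit is silently dropped, which is why a bigger context window directly translates into more grounding material per prompt.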

Going off-script

Even with all your Herculean retrieval efforts, that doesn't mean that the LLM is going to stick to the script. It can still hallucinate and get things wrong.

I suspect these are some reasons why Google did not move on this technology sooner, but since they finally got in the game, let's talk about it.


Numerous articles will tell you what SGE is from a consumer perspective.

For this discussion, we'll talk about how SGE is one of Google's implementations of RAG; Bard is the other.

(Sidebar: Bard's output has gotten a lot better since launch. You should probably give it another try.)

The SGE UX is still very much in flux. As I write this, Google has made shifts to collapse the experience behind "Show more" buttons.

Let's zero in on the three aspects of SGE that will change search behavior significantly:

Historically, search queries have been limited to 32 words. Because documents were considered based on intersecting posting lists for the 2- to 5-word phrases in those queries, and the expansion of those terms, Google did not always understand the meaning of the query. Google has indicated that SGE is much better at understanding complex queries.

The AI snapshot is a more robust form of the featured snippet with generative text and links to citations. It often takes up the entirety of the above-the-fold content area.

The follow-up questions bring the concept of context windows in ChatGPT into search. As the user moves from their initial search to subsequent follow-up searches, the consideration set of pages narrows based on the contextual relevance created by the preceding results and queries.

All of this is a departure from the standard functionality of Search. As users get used to these new elements, there is likely to be a significant shift in behavior as Google focuses on lowering the Delphic costs of Search. After all, users always wanted answers, not 10 blue links.

The market believes that Google built SGE as a reaction to Bing in early 2023. However, the Google Research team presented an implementation of RAG in their paper, "Retrieval-Augmented Language Model Pre-Training (REALM)," published in August 2020.

The paper talks about a method of using the masked language model (MLM) approach popularized by BERT to do open-book question answering using a corpus of documents with a language model.

REALM identifies full documents, finds the most relevant passages in each, and returns the single most relevant one for information extraction.

During pre-training, REALM is trained to predict masked tokens in a sentence, but it is also trained to retrieve relevant documents from a corpus and attend to these documents when making predictions. This allows REALM to learn to generate more factually accurate and informative text than traditional language models.

Google's DeepMind team then took the idea further with the Retrieval-Enhanced Transformer (RETRO). RETRO is a language model that is similar to REALM, but it uses a different attention mechanism.

RETRO attends to the retrieved documents in a more hierarchical way, which allows it to better understand the context of the documents. This results in text that is more fluent and coherent than text generated by REALM.

Following RETRO, the teams developed an approach called Retrofit Attribution using Research and Revision (RARR) to help validate the output of an LLM and cite its sources.

RARR is a different approach to language modeling. RARR does not generate text from scratch. Instead, it retrieves a set of candidate passages from a corpus and then reranks them to select the best passage for the given task. This approach allows RARR to generate more accurate and informative text than traditional language models, but it can be more computationally expensive.

These three implementations for RAG all have different strengths and weaknesses. While whats in production is likely some combination of innovations represented in these papers and more, the idea remains that documents and knowledge graphs are searched and used with a language model to generate a response.

Based on the publicly shared information, we know that SGE uses a combination of the PaLM 2 and MUM language models with aspects of Google Search as its retriever. The implication is that Google's document index and Knowledge Vault can both be used to fine-tune the responses.

Bing got there first, but with Google's strength in Search, there is no organization as qualified to use this paradigm to surface and personalize information.

Google's mission is to organize the world's information and make it accessible. In the long term, perhaps we'll look back at the 10 blue links the same way we remember MiniDiscs and two-way pagers. Search, as we know it, is likely just an intermediate step until we arrive at something much better.

ChatGPT's recent launch of multimodal features is a step toward the "Star Trek" computer that Google engineers have often indicated they want to build. Searchers have always wanted answers, not the cognitive load of reviewing and parsing through a list of options.

A recent opinion paper titled "Situating Search" challenges that belief, stating that users prefer to do their own research and validation, yet search engines have charged ahead.

So, heres what is likely to happen as a result.

As users move away from queries composed of newspeak, their queries will get longer.

As users realize that Google has a better handle on natural language, it will change how they phrase their searches. Head terms will shrink while chunky middle and long-tail queries will grow.

The 10 blue links will get fewer clicks because the AI snapshot will push the standard organic results down. The 30-45% click-through rate (CTR) for Position 1 will likely drop precipitously.

However, we currently don't have true data to indicate how the distribution will change. So, the chart below is only for illustrative purposes.

Rank tracking tools have had to render the SERPs for various features for some time. Now, these tools will need to wait longer per query.

Most SaaS products are built on platforms like Amazon Web Services (AWS), Google Cloud Platform (GCP) and Microsoft Azure, which charge for compute costs based on the time used.

While rendered results may have come back in 1-2 seconds, rank trackers may now need to wait much longer, causing the costs of rank tracking to increase.

Follow-up questions will give users Choose Your Own Adventure-style search journeys. As the context window narrows, a series of hyper-relevant content will populate the journey where each individual query would otherwise have yielded vaguer results.

Effectively, searches become multidimensional, and the onus is on content creators to make their content fulfill multiple stages to remain in the consideration set.

In the example above, Geico would want to have content that overlaps with these branches so they remain in the context window as the user progresses through their journey.

We don't have data on how user behavior has changed in the SGE environment. If you do, please reach out (looking at you, SimilarWeb).

What we do have is some historical understanding of user behavior in search.

We know that users take an average of 14.66 seconds to choose a search result. This tells us that a user will not wait for an automatically triggered AI snapshot with a generation time of more than roughly 14.6 seconds. Therefore, anything beyond that time range does not immediately threaten your organic search traffic, because a user will just scroll down to the standard results rather than wait.

We also know that, historically, featured snippets have captured 35.1% of clicks when they are present in the SERPs.

These two data points can be used to inform a few assumptions to build a model of the threat of how much traffic could be lost from this rollout.
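As a first pass at such a model, here is a back-of-envelope sketch combining those two data points: if a snapshot renders within the ~14.6-second attention window, assume it captures clicks at roughly the historical featured-snippet rate (35.1%), eating into Position 1's historical 30-45% CTR (0.375 used as a midpoint). These are illustrative assumptions, not measured SGE behavior.

```python
def estimated_p1_ctr(load_time_s: float, p1_ctr: float = 0.375,
                     snapshot_capture: float = 0.351,
                     attention_window_s: float = 14.6) -> float:
    """Estimate Position 1 CTR once an AI snapshot is in play."""
    if load_time_s > attention_window_s:
        return p1_ctr  # users scroll to the organic results before it renders
    # assume the snapshot siphons off clicks at the featured-snippet rate
    return p1_ctr * (1 - snapshot_capture)

print(round(estimated_p1_ctr(6.08), 4))  # fast snapshot: Position 1 CTR shrinks
print(estimated_p1_ctr(20.0))            # slow snapshot: CTR unchanged
```

Even this crude model shows the shape of the threat: the faster snapshots get, the larger the share of queries whose Position 1 CTR takes the hit.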

Let's first review the state of SGE based on available data.

Since there's no data on SGE, it would be great if someone created some. I happened to come across a dataset of roughly 91,000 queries and their SERPs within SGE.

For each of these queries, the dataset includes the rendered SERP, snapshot load times, and the results cited in the AI snapshot.

The queries are also segmented into different categories so we can get a sense of how different things perform. I don't have enough of your attention left to go through the entirety of the dataset, but here are some top-level findings.

AI snapshots now take an average of 6.08 seconds to generate

When SGE first launched and I started reviewing load times of the AI snapshot, it took 11 to 30 seconds for them to appear. Now I'm seeing a range of 1.8 to 17.2 seconds. Automatically triggered AI snapshots load between 2.9 and 15.8 seconds.

As you can see from the chart, most load times are well below 14.6 seconds at this point. It's pretty clear that the 10-blue-link traffic for the overwhelming majority of queries will be threatened.

The average varies a bit depending on the keyword category. The Entertainment-Sports category has a much higher load time than all other categories, which may be a function of how heavy the source content typically is in each given vertical.

Snapshot type distribution

While there are many variants of the experience, I have broadly segmented the snapshot types into Informational, Local, and Shopping page experiences. Within my 91,000 keyword set, the breakdown is 51.08% informational, 31.31% local, and 17.60% shopping.

60.34% of queries did not feature an AI snapshot

In parsing the page content, the dataset identifies two cases to verify whether there is a snapshot on the page: it looks for the auto-triggered snapshot and the Generate button. Reviewing this data indicates that 39.66% of queries in the dataset triggered AI snapshots.

The top 10 results are often used but not always

In the dataset I've reviewed, Positions 1, 2, and 9 get cited the most in the AI snapshot's carousel.

The AI snapshot most often uses six results out of the top 10 to build its response. However, 9.48% of the time, it does not use any of the top 10 results in the AI snapshot.

Based on my data, it rarely uses all the results from the top 10.

Highly relevant chunks often appear earlier in the carousel

Let's consider the AI snapshot for the query [bmw i8]. The query returns seven results in the carousel. Four of them are explicitly referenced in the citations.

Clicking on a result in the carousel often takes you to one of the "fraggles" (the term the brilliant Cindy Krum coined for passage-ranking links) that drop you on a specific sentence or paragraph.

The implication is that these are the paragraphs or sentences that inform the AI snapshot.

Naturally, our next step is to try to get a sense of how these results are ranked because they are not presented in the same order as the URLs cited in the copy.

I assume that this ranking is more about relevance than anything else.

To test this hypothesis, I vectorized the paragraphs using the Universal Sentence Encoder and compared them to the vectorized query to see if the descending order holds up.

I'd expect the paragraph with the highest similarity score to be the first one in the carousel.

The results are not quite what I expected. Perhaps there is some query expansion at play, where the query I'm comparing is not the same as what Google might be comparing.
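The comparison step itself is simple to reproduce. Here is a minimal sketch using cosine similarity over toy three-dimensional vectors; in the real experiment, each vector would come from the Universal Sentence Encoder, and the values below are invented for illustration.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query_vec = [0.9, 0.1, 0.2]  # stand-in embedding for the query [bmw i8]
paragraphs = {
    "carousel_pos_1": [0.8, 0.2, 0.1],  # stand-in paragraph embeddings
    "carousel_pos_2": [0.1, 0.9, 0.3],
}
# Rank paragraphs by descending similarity to the query.
ranked = sorted(paragraphs, key=lambda k: cosine(query_vec, paragraphs[k]), reverse=True)
print(ranked)
```

If relevance alone drove the carousel, this ranking would match the carousel order; the article's observation is that it often doesn't, hinting at query expansion or other signals.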

More here:
How Search Generative Experience works and why retrieval ... - Search Engine Land

ONE: Radzuan responds to Stamp rematch talk, impressed by title win – South China Morning Post

Jihin Radzuan was very impressed by Stamp Fairtex's latest ONE Championship title win but insists she is not thinking about a rematch yet.

Thailand's Stamp (11-2), who previously held ONE's atomweight kickboxing and Muay Thai titles, became the promotion's undisputed atomweight MMA champion with a TKO victory over South Korean veteran Ham Seo-hee at ONE Fight Night 14 late last month in Singapore.

Malaysia's Radzuan (9-3) lost a decision to the new champion almost a year earlier to the day, but was called upon to serve as one of the Thai's chief training partners ahead of the title fight, and was glad to see their efforts in the gym pay off.

"I was expecting this fight is going to be brutal," Radzuan, 25, told the Post this week from Pattaya, where she has been working with Stamp at the Fairtex Training Centre. "[I expected] some striking exchanges and some takedown defence and everything.

"Stamp did very impressive work."

Radzuan has been doing her best Ham impressions at Fairtex since June.

If Stamps performance against the South Korean was any indication, inviting Radzuan to the gym was a good decision.

Radzuan, of course, has also benefited from the arrangement.

"Working with her, it's a great experience that I can improve my striking," the Malaysian said. "Then for the grappling, you can see I did some work with her."

Radzuan flaunted the upgrades to her game one day before Stamp defeated Ham, submitting Filipino mainstay Jenelyn Olsim via third-round armbar.

It was the Malaysian's first fight since she was defeated by Stamp a year earlier, and she was glad to move past the loss to her new training partner.

"It's good to be back in the fighting scene," she said. "I've been away for almost a year. That fight, it really put me back on the fighting map.

"I was scared that I might freeze or something, but I did quite well."

The win over Olsim cemented Radzuan's position as ONE's No 5-ranked atomweight MMA contender, and it should set her up for another big fight in the division.

One fight that she sees as a possibility is a rematch with No 2-ranked Filipino Denice Zamboanga, to whom she lost a decision in 2019.


She is also willing to welcome No 3 contender Alyona Rassohyna back to the Circle, though it does not seem to be her first choice.

Ukraine's Rassohyna has not fought since a decision loss to Stamp in 2021, having taken a hiatus to give birth to her second daughter, but is aiming to return to action soon.

"You know my style, I'll never deny any fight," Radzuan said when asked about a Rassohyna match-up. "I just accept the offer, as long as there's nothing blocking me from getting the fight.

"The thing is, she's been away for like two years. I don't think that she deserves the ranking. I think we should give it to someone who fought recently, maybe Itsuki [Hirata] or someone, but we'll see how it goes."

One way or the other, if Radzuan continues to win and Stamp maintains control of the title, it is possible the two training partners will be asked to fight again with a belt on the line.

Radzuan, who is determined to become a champion herself one day, would not refuse that opportunity, but would prefer not to think about fighting her friend again until it is necessary.

"Like everyone in the weight division, my goal is to be the champion," she said. "And when you got into the ring or the cage, it's a fight, it's our work. Of course, I'm not saying that right now I want to challenge Stamp.

"I will climb myself. I'm still friendly with [Stamp]. I love what she's doing. I really look up to her, and it's not my style to say, 'She's got a belt right now and I want to challenge her.' For now, I want to prove myself more, and then we'll see from there."

Read more from the original source:
ONE: Radzuan responds to Stamp rematch talk, impressed by title win - South China Morning Post

California Law Limits Bitcoin ATM Transactions to $1,000 to Thwart … – Slashdot

One 80-year-old retired teacher in Los Angeles lost $69,000 in bitcoin to scammers. And 46,000 people lost over $1 billion to crypto scams since 2021 (according to America's Federal Trade Commission).

Now the Los Angeles Times reports California's new moves against scammers using bitcoin ATMs, with a bill one representative says "is about ensuring that people who have been frauded in our communities don't continue to watch our state step aside when we know that these are real problems that are happening." Starting in January, California will limit cryptocurrency ATM transactions to $1,000 per day per person under Senate Bill 401, which Gov. Gavin Newsom signed into law. Some bitcoin ATM machines advertise limits as high as $50,000... Victims of bitcoin ATM scams say limiting the transactions will give people more time to figure out they're being tricked and prevent them from using large amounts of cash to buy cryptocurrency.

But crypto ATM operators say the new laws will harm their industry and the small businesses they pay to rent space for the machines. There are more than 3,200 bitcoin ATMs in California, according to Coin ATM Radar, a site that tracks the machines' locations. "This bill fails to adequately address how to crack down on fraud, and instead takes a punitive path focused on a specific technology that will shudder the industry and hurt consumers, while doing nothing to stop bad actors," said Charles Belle, executive director of the Blockchain Advocacy Coalition...

Law enforcement has cracked down on unlicensed crypto ATMs, but it can be tough for consumers to tell how serious the industry is about addressing the concerns. In 2020, a Yorba Linda man pleaded guilty to charges of operating unlicensed bitcoin ATMs and failing to maintain an anti-money-laundering program even though he knew criminals were using the funds. The illegal business, known as Herocoin, allowed people to buy and sell bitcoin in transactions of up to $25,000 and charged a fee of up to 25%. So there's also provisions in the law against exorbitant fees: The new law also bars bitcoin ATM operators from collecting fees higher than $5 or 15% of the transaction, whichever is greater, starting in 2025. Legislative staff members visited a crypto kiosk in Sacramento and found markups as high as 33% on some digital assets when they compared the prices at which cryptocurrency is bought and sold. Typically, a crypto ATM charges fees between 12% and 25% over the value of the digital asset, according to a legislative analysis...

Another law would, by July 2025, require digital financial asset businesses to obtain a license from the California Department of Financial Protection and Innovation.

Original post:
California Law Limits Bitcoin ATM Transactions to $1,000 to Thwart ... - Slashdot

Tech CEO Sentenced To 5 Years in IP Address Scheme – Slashdot

Amir Golestan, the 40-year-old CEO of the Charleston, S.C. based technology company Micfo, has been sentenced to five years in prison for wire fraud. From a report: Golestan's sentencing comes nearly two years after he pleaded guilty to using an elaborate network of phony companies to secure more than 735,000 Internet Protocol (IP) addresses from the American Registry for Internet Numbers (ARIN), the nonprofit which oversees IP addresses assigned to entities in the U.S., Canada, and parts of the Caribbean.

In 2018, ARIN sued Golestan and Micfo, alleging they had obtained hundreds of thousands of IP addresses under false pretenses. ARIN and Micfo settled that dispute in arbitration, with Micfo returning most of the addresses that it hadn't already sold. ARIN's civil case caught the attention of federal prosecutors in South Carolina, who in May 2019 filed criminal wire fraud charges against Golestan, alleging he'd orchestrated a network of shell companies and fake identities to prevent ARIN from knowing the addresses were all going to the same buyer.

See the article here:
Tech CEO Sentenced To 5 Years in IP Address Scheme - Slashdot