SEO for Job Boards: What Actually Moves the Needle
I built BirJob.com, a job aggregator for Azerbaijan. One developer, 91 scrapers, around 5,000 active listings from 50+ sources on any given day. The entire growth strategy is organic search. No ad budget. No marketing team. Just code and patience.
This post is about what I actually did for SEO, what worked, what was a waste of time, and what I would do differently. I will share real code from the codebase, not hypothetical examples. If you are building a job board or any content-heavy site, especially in a small or non-English market, some of this might save you a few months of trial and error.
1. Why SEO Matters More Than Paid Ads for a Job Board in a Small Market
Azerbaijan has roughly 10 million people. The Azerbaijani-language internet is not that big. The pool of people searching for jobs online in Baku is large enough to build a business, but not large enough to justify burning money on Google Ads at competitive CPCs.
Here is the math I did before writing a single line of SEO code: the average CPC for "vakansiyalar" (vacancies) in Azerbaijan is around 0.15-0.30 AZN. That sounds cheap until you realize the conversion from "landed on the site" to "actually clicked apply" is maybe 15-20%. There is no revenue from job seekers. Revenue comes from HR companies posting sponsored jobs. So the unit economics of paid acquisition for job seekers never made sense.
Organic search, on the other hand, compounds. One well-structured page can rank and bring in traffic for months. A job posting page that ranks for "PASHA Bank vakansiya" today will keep bringing in visitors as long as that company is hiring. And they are always hiring.
So the decision was straightforward: invest engineering time into SEO infrastructure and get compounding returns, or spend money on ads and get linear returns. For a solo developer with more time than money, this was not a close call.
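To make that concrete, here is the back-of-envelope cost per applying visitor implied by the numbers above (the midpoints are my choice, not figures from BirJob's books):

```typescript
// Cost per "clicked apply" visitor = CPC / apply-conversion rate.
// Midpoints of the ranges quoted above: 0.225 AZN CPC, 17.5% conversion.
const cpcAzn = 0.225;
const applyRate = 0.175;
const costPerApplicant = cpcAzn / applyRate;

console.log(costPerApplicant.toFixed(2)); // ~1.29 AZN per applicant
// With zero direct revenue per job seeker, every paid applicant is pure cost,
// while an organic page that ranks keeps delivering applicants for free.
```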
2. URL Structure: Human-Readable Slugs That Carry Keywords
The first SEO decision I made was URL structure. Job boards are notorious for ugly URLs.
You have seen them: /job?id=48291&ref=search&page=3. Google can index those, but
they carry zero keyword signal, and users do not trust them in search results.
BirJob uses a slug format like /jobs/frontend-developer-pasha-bank-12345. The slug
contains the job title, the company name, and a numeric ID at the end for database lookup. Here is
the actual slugification code:
function slugify(text: string): string {
return text
.toLowerCase()
.replace(/ə/g, 'e')
.replace(/ı/g, 'i')
.replace(/ü/g, 'u')
.replace(/ö/g, 'o')
.replace(/ğ/g, 'g')
.replace(/ç/g, 'c')
.replace(/ş/g, 's')
.replace(/[^a-z0-9]+/g, '-')
.replace(/^-+|-+$/g, '')
.slice(0, 50);
}
export function slugifyJobUrl(
title: string, company: string, id: number
): string {
const t = slugify(title);
const c = slugify(company);
const parts = [t, c, String(id)].filter(Boolean);
return parts.join('-');
}
A few things worth noting:
- Azerbaijani character transliteration. Characters like ə, ı, ü, ö, ğ, ç, ş are replaced with their ASCII equivalents. This keeps URLs clean and compatible with all systems while preserving readability. A job titled "Mühasib vakansiyası" becomes muhasib-vakansiyasi in the slug.
- 50-character limit. Long job titles like "Senior Full-Stack Developer with 5+ Years of Experience in Cloud Infrastructure" would produce unwieldy URLs. Truncating at 50 characters keeps things manageable.
- ID at the end. The numeric ID is always the last segment. This makes extraction trivial with a regex:
export function idFromSlug(slug: string): number {
const match = slug.match(/(\d+)$/);
return match ? parseInt(match[1], 10) : NaN;
}
This design means old bookmarked URLs like /jobs/12345 still work. The page extracts
the ID, loads the job, computes the canonical slug, and redirects:
const canonicalSlug = slugifyJobUrl(job.title, job.company, job.id);
if (params.id !== canonicalSlug) {
redirect(`/jobs/${canonicalSlug}`);
}
This 301 redirect is important. It means there is exactly one URL for each job, no matter how someone arrives at the page. Google only sees the canonical version with keywords in the URL.
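As a sanity check on the scheme above, here is a condensed, standalone restatement of the helpers so the round trip can be verified (behavior matches the code shown earlier; only formatting differs):

```typescript
// Condensed restatement of the slug helpers from above, for a runnable demo.
function slugify(text: string): string {
  return text
    .toLowerCase()
    .replace(/ə/g, 'e').replace(/ı/g, 'i').replace(/ü/g, 'u').replace(/ö/g, 'o')
    .replace(/ğ/g, 'g').replace(/ç/g, 'c').replace(/ş/g, 's')
    .replace(/[^a-z0-9]+/g, '-')   // everything else becomes a hyphen
    .replace(/^-+|-+$/g, '')       // trim leading/trailing hyphens
    .slice(0, 50);
}

function slugifyJobUrl(title: string, company: string, id: number): string {
  return [slugify(title), slugify(company), String(id)].filter(Boolean).join('-');
}

function idFromSlug(slug: string): number {
  const match = slug.match(/(\d+)$/);   // the ID is always the last segment
  return match ? parseInt(match[1], 10) : NaN;
}

const slug = slugifyJobUrl('Mühasib vakansiyası', 'Kapital Bank', 12345);
// slug === 'muhasib-vakansiyasi-kapital-bank-12345'
// idFromSlug(slug) === 12345, and the legacy form idFromSlug('12345') === 12345
```

The ID-last convention is what lets both the keyword slug and the legacy /jobs/12345 form feed the same extraction path.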
3. Dynamic Sitemap Generation with Priority Tiers
A job board sitemap is not like a blog sitemap. You have thousands of pages that appear and disappear every day. Static sitemaps do not work. BirJob generates its sitemap dynamically by querying the database at request time.
But here is the thing most people get wrong about sitemaps: not all pages are equal. A job posted today matters more than a job posted three weeks ago. A sponsored job (which generates revenue) matters more than an aggregated one. I built a priority tier system:
const [freshJobs, olderJobs, sponsoredJobs, blogPosts, topCompanies] =
await Promise.all([
// Fresh jobs (last 7 days) -- higher priority
prisma.jobs_jobpost.findMany({
where: {
is_active: true,
last_seen_at: { gte: sevenDaysAgo }
},
orderBy: { last_seen_at: 'desc' },
take: 5000,
select: { id: true, title: true, company: true, last_seen_at: true },
}),
// Older jobs (7-30 days) -- lower priority
prisma.jobs_jobpost.findMany({
where: {
is_active: true,
last_seen_at: { gte: thirtyDaysAgo, lt: sevenDaysAgo }
},
orderBy: { last_seen_at: 'desc' },
take: 5000,
select: { id: true, title: true, company: true, last_seen_at: true },
}),
// Sponsored jobs -- highest job priority (revenue pages)
prisma.sponsored_job.findMany({
where: {
is_active: true,
payment_status: 'paid',
ends_at: { gte: new Date() }
},
orderBy: { created_at: 'desc' },
select: { id: true, title: true, company: true, created_at: true },
}),
// Published blog posts
prisma.blog_post.findMany({
where: { published: true },
orderBy: { published_at: 'desc' },
select: { slug: true, updated_at: true },
}),
// Top companies with >= 3 jobs
prisma.$queryRaw`
SELECT company FROM scraper.jobs_jobpost
WHERE is_active = TRUE AND company IS NOT NULL AND company != ''
GROUP BY company HAVING COUNT(*) >= 3
ORDER BY COUNT(*) DESC LIMIT 200
`,
]);
The priority values assigned to each tier:
- Sponsored jobs: 0.9 -- These are revenue-generating pages. Google should crawl them first.
- Category pages: 0.8 -- Evergreen landing pages with high keyword value.
- Fresh jobs (under 7 days): 0.7 -- Time-sensitive, worth crawling daily.
- Blog posts: 0.7 -- Long-form content that ranks for informational queries.
- Company pages: 0.6 -- Updated daily, useful for brand-name searches.
- Older jobs (7-30 days): 0.5 -- Still relevant, but less urgent to re-crawl.
- Static pages (about, privacy, terms): 0.3-0.5 -- Rarely change.
The order in the returned array may also matter. Google's documentation suggests ordering within a sitemap carries little weight, but it costs nothing, so I deliberately place sponsored jobs before fresh jobs, and fresh jobs before older ones:
return [
...STATIC_PAGES,
...categoryPages,
...companyPages,
...sponsoredPages, // sponsored first
...freshJobPages,
...blogPostPages,
...olderJobPages,
];
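The mapping from query rows to sitemap entries is not shown in the post; a sketch consistent with the tiers above might look like this (the toEntries helper and the simplified slug are mine; the real code would reuse the transliterating slugifier from section 2):

```typescript
// Sketch: turn job rows into sitemap entries carrying the tiered priorities.
// Row shape mirrors the `select` clauses in the queries above.
interface JobRow { id: number; title: string; company: string; last_seen_at: Date }

interface SitemapEntry {
  url: string;
  lastModified: Date;
  changeFrequency: 'daily' | 'weekly';
  priority: number;
}

// Simplified slug for the demo (no transliteration).
function slugPart(s: string): string {
  return s.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-+|-+$/g, '');
}

function toEntries(
  jobs: JobRow[], priority: number, changeFrequency: 'daily' | 'weekly'
): SitemapEntry[] {
  return jobs.map((job) => ({
    url: `https://www.birjob.com/jobs/${slugPart(job.title)}-${slugPart(job.company)}-${job.id}`,
    lastModified: job.last_seen_at, // the row's real timestamp, never new Date()
    changeFrequency,
    priority,
  }));
}

// const freshJobPages = toEntries(freshJobs, 0.7, 'daily');
// const olderJobPages = toEntries(olderJobs, 0.5, 'weekly');
```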
One mistake I made early on: using new Date() as the lastModified value
for static pages. This tells Google "this page changed right now" on every single crawl, which wastes
crawl budget on pages that have not actually changed. I fixed this by hardcoding dates for static pages.
// Static pages with fixed last-modified dates
// (don't use new Date() -- wastes crawl budget)
const STATIC_PAGES = [
{
url: 'https://www.birjob.com',
lastModified: new Date('2026-03-20'),
changeFrequency: 'daily',
priority: 1.0
},
// ...
];
4. Meta Tags and Open Graph for Job Pages
Every job page on BirJob generates its own meta tags dynamically. This is not optional for a job board. Generic titles like "Job Listing - BirJob" are useless. You need the job title and company name in the title tag because that is what people search for.
Here is how the job detail page generates its metadata:
export async function generateMetadata({ params }): Promise<Metadata> {
const id = idFromSlug(params.id);
if (isNaN(id)) return {};
const job = await fetchJobById(id);
if (!job) return {};
const canonicalSlug = slugifyJobUrl(job.title, job.company, job.id);
const canonicalUrl = `https://www.birjob.com/jobs/${canonicalSlug}`;
const lastSeen = new Date(job.last_seen_at);
const daysAgo = Math.floor(
(Date.now() - lastSeen.getTime()) / (1000 * 60 * 60 * 24)
);
const freshness = daysAgo === 0
? 'Bugün yeniləndi'
: daysAgo === 1
? 'Dünən yeniləndi'
: `${daysAgo} gün əvvəl yeniləndi`;
return {
title: job.title,
description: `${freshness} -- ${job.company} şirkətində ${job.title} vakansiyası`,
keywords: [
`${job.company} vakansiya`,
`${job.title} vakansiyasi`,
`${job.company} ${job.title}`,
],
openGraph: {
title: `${job.title} -- ${job.company}`,
description: `${job.company} şirkətində ${job.title} vakansiyası.`,
url: canonicalUrl,
images: [{ url: '/images/og-image.jpg', width: 1200, height: 630 }],
},
alternates: { canonical: canonicalUrl },
};
}
A few deliberate choices here:
- Freshness in the meta description. Including "Updated today" or "Updated 3 days ago" makes the search result snippet feel current. In a market where job seekers are suspicious of stale listings, this matters.
- Company + title in Open Graph. When someone shares a BirJob link on LinkedIn or WhatsApp (both popular in Azerbaijan), the preview card shows both the job title and company. This dramatically increases click-through on shared links.
- Noindex for inactive jobs. When a job expires, the page stays up (someone might have bookmarked it), but we add robots: { index: false, follow: false }. No point polluting the index with dead listings:
...(job.is_active === false && {
robots: { index: false, follow: false }
}),
The root layout also sets up site-wide meta configuration with a title template:
export const metadata: Metadata = {
title: {
template: '%s | BirJob',
default: 'BirJob -- Azərbaycan Vakansiyaları',
},
description: 'Azərbaycanda iş axtarışı 2026...',
metadataBase: new URL('https://www.birjob.com'),
// ...
};
The template: '%s | BirJob' pattern means every page title automatically gets
" | BirJob" appended, creating consistent branding across all search results without repeating
the code on every page.
5. Structured Data (JSON-LD) for Job Postings
This is the single highest-impact SEO thing I did. Google has a dedicated "Jobs" rich result type.
If your job posting pages have proper JobPosting structured data, they can appear in
Google's job search experience, which shows up directly in search results with a dedicated UI.
Here is the actual JSON-LD schema generated for each job page:
const jobSchema = {
'@context': 'https://schema.org',
'@type': 'JobPosting',
'@id': canonicalUrl,
title: job.title,
datePosted: postedDate.toISOString(),
validThrough,
description: `${job.title} vakansiyası ${job.company} şirkətində...`,
hiringOrganization: {
'@type': 'Organization',
name: job.company
},
jobLocation: {
'@type': 'Place',
address: {
'@type': 'PostalAddress',
streetAddress: 'Baki',
addressLocality: 'Baki',
addressRegion: 'Baki',
postalCode: 'AZ1000',
addressCountry: 'AZ',
},
},
employmentType: 'FULL_TIME',
url: canonicalUrl,
applicationContact: {
'@type': 'ContactPoint',
url: job.apply_link
},
};
The validThrough field is calculated as 30 days after posting for active jobs. For
inactive jobs, it is set to the last_seen_at timestamp:
const validThrough = job.is_active
? new Date(postedDate.getTime() + 30 * 24 * 60 * 60 * 1000).toISOString()
: new Date(job.last_seen_at).toISOString();
For sponsored jobs (paid listings posted directly by HR managers), the schema includes additional fields:
// Sponsored job schema includes salary when available
...(job.salary ? {
baseSalary: {
'@type': 'MonetaryAmount',
currency: 'AZN',
value: {
'@type': 'QuantitativeValue',
description: job.salary,
unitText: 'MONTH'
},
},
} : {}),
directApply: true,
The directApply: true flag on sponsored jobs tells Google that candidates can apply directly through BirJob without being bounced to another site, a signal Google's job search experience can surface to users.
Beyond individual job pages, the homepage carries its own structured data: a WebSite
schema with a SearchAction (the markup behind Google's sitelinks search box -- a
feature Google has since retired, though the markup costs nothing to keep) and an
Organization schema:
const websiteSchema = {
'@context': 'https://schema.org',
'@type': 'WebSite',
name: 'BirJob',
url: 'https://www.birjob.com',
potentialAction: {
'@type': 'SearchAction',
target: {
'@type': 'EntryPoint',
urlTemplate: 'https://www.birjob.com/?search={search_term_string}',
},
'query-input': 'required name=search_term_string',
},
};
Category pages use ItemList and BreadcrumbList schemas. Company pages
add an Organization schema for the company itself. Every page type has its own
structured data. This was tedious to implement, but the coverage in Google's rich results
made it worthwhile.
6. Category Pages as SEO Landing Pages
This is a strategy that many job boards overlook. Instead of only having a homepage with a search box, create dedicated category pages that target specific keyword clusters.
BirJob has a set of predefined categories at /vakansiyalar/[category]. Each one is
a proper landing page with its own meta tags, structured data, and content. Here are some of the
categories:
- /vakansiyalar/it -- IT / Texnologiya
- /vakansiyalar/maliyye -- Maliyyə / Mühasibat
- /vakansiyalar/satis -- Satış / Marketing
- /vakansiyalar/huquq -- Hüquq
- /vakansiyalar/muhendislik -- Mühəndislik
- /vakansiyalar/bank -- Bank / Sığorta
- /vakansiyalar/insan-resurslari -- İnsan Resursları
- /vakansiyalar/tibb -- Tibb / Səhiyyə
Each category has a rich definition that includes keywords for matching, metadata for SEO, and a human-readable description:
export const CATEGORIES: Category[] = [
{
slug: 'it',
label: 'IT / Texnologiya',
keywords: [
'developer', 'software', 'proqram', 'DevOps',
'frontend', 'backend', 'fullstack', 'data analyst',
'network', 'sysadmin', 'qa', 'tester', 'python',
'java', 'javascript', 'kibertehlukezizlik',
],
description: 'Azərbaycanda IT sahəsindəki vakansiyalar...',
metaTitle: 'IT Vakansiyalari Baki',
metaDescription: 'Azərbaycanda IT vakansiyalari 2026...',
},
// ...
];
The category metadata generation is straightforward:
export async function generateMetadata({ params }): Promise<Metadata> {
const category = getCategoryBySlug(params.category);
if (!category) return {};
return {
title: category.metaTitle,
description: category.metaDescription,
alternates: {
canonical: `https://www.birjob.com/vakansiyalar/${category.slug}`
},
openGraph: {
title: `${category.metaTitle} | BirJob`,
description: category.metaDescription,
url: `https://www.birjob.com/vakansiyalar/${category.slug}`,
},
};
}
Why does this work? Because people do not always search for "vakansiyalar" (vacancies) generically. They search for "IT vakansiyalari Baki" or "bank vakansiyalari 2026". These category pages are purpose-built to rank for those queries.
The pages are statically generated at build time using generateStaticParams:
export async function generateStaticParams() {
return CATEGORIES.map((c) => ({ category: c.slug }));
}
This means they load instantly (served from CDN edge) and get the revalidation treatment:
export const revalidate = 300; // 5 minutes
Every 5 minutes, the page regenerates with fresh job data. The content stays current, the load time stays fast, and Google sees a page that genuinely updates regularly.
7. Company Pages for Brand-Name Searches
One of the most valuable search patterns in the Azerbaijani job market is "[Company Name] vakansiya". People search for specific companies: "PASHA Bank vakansiya", "Azercell is", "SOCAR vakansiya". If BirJob does not have a dedicated page for that company, we lose that traffic to the company's own careers page or to competitors.
So I built company pages at /companies/[slug]. The sitemap includes companies that
have at least 3 active job listings:
// Top companies with >= 3 jobs
prisma.$queryRaw`
SELECT company FROM scraper.jobs_jobpost
WHERE is_active = TRUE
AND company IS NOT NULL AND company != ''
GROUP BY company HAVING COUNT(*) >= 3
ORDER BY COUNT(*) DESC LIMIT 200
`
Each company page generates metadata that targets brand-name searches:
export async function generateMetadata({ params }): Promise<Metadata> {
const company = await findCompany(params.slug);
if (!company) return {};
return {
title: `${company} Vakansiyaları`,
description: `${company} şirkətinin aktiv vakansiyaları.
Bakı, Azərbaycan -- BirJob-da pulsuz müraciət et.`,
alternates: {
canonical: `https://www.birjob.com/companies/${params.slug}`
},
};
}
The page also includes Organization, ItemList, and
BreadcrumbList structured data for each company. This gives Google maximum
context about what the page represents.
A subtle but important detail: company pages revalidate every hour
(export const revalidate = 3600) instead of every 5 minutes. Companies do not
post new jobs every 5 minutes, and the extra caching reduces database load.
8. The Blog as an SEO Channel
Job boards have a structural SEO problem: most of your pages are transient. A job posting lives for 2-4 weeks, then it disappears. Google indexes it, the listing expires, the page returns a 404 or noindex, and any link equity or ranking authority evaporates.
Blog content solves this. A blog post about "How to Write a CV in Azerbaijan" or "Top IT Companies Hiring in Baku" is evergreen. It ranks, accumulates backlinks, and channels that authority back to the domain.
BirJob's blog posts are included in the sitemap with a priority of 0.7 (same as fresh job listings) and a monthly change frequency:
const blogPostPages = blogPosts.map((post) => ({
url: `https://www.birjob.com/blog/${post.slug}`,
lastModified: post.updated_at,
changeFrequency: 'monthly',
priority: 0.7,
}));
The blog also serves an indirect SEO function: it gives people a reason to link to BirJob that is not just "here is a job listing". Nobody links to individual job posts. But they will link to a useful article about the job market.
9. Canonical URLs and Duplicate Content Prevention
Duplicate content is the silent killer of job board SEO. The same job can appear with different URL parameters (pagination, search filters, source filters), and if Google indexes all of those variants, you end up competing with yourself.
BirJob handles this at multiple levels:
Level 1: Canonical tags on every page
Every page explicitly declares its canonical URL. This is not optional. If you do not set a canonical, Google guesses, and Google often guesses wrong.
// Job page
alternates: { canonical: canonicalUrl }
// Category page
alternates: {
canonical: `https://www.birjob.com/vakansiyalar/${category.slug}`
}
// Homepage
alternates: { canonical: 'https://www.birjob.com' }
Level 2: Noindex for filtered and paginated views
The homepage can be accessed with query parameters like ?page=2,
?search=developer, or ?source=linkedin. These are useful for users
but should not be indexed separately. The metadata generator handles this:
export async function generateMetadata({ searchParams }): Promise<Metadata> {
const page = Math.max(1, parseInt(searchParams.page || '1'));
const search = searchParams.search || '';
const source = searchParams.source || '';
// Paginated or filtered pages -- noindex to avoid duplicate content
if (page > 1 || search || source) {
return {
...BASE_META,
robots: { index: false, follow: true },
};
}
return BASE_META;
}
Note the follow: true even on noindexed pages. This tells Google "do not index this
page, but do follow the links on it." The job links on page 2 of search results should still be
crawled and indexed. You just do not want page 2 itself competing with page 1.
Level 3: Slug-based redirects for non-canonical URLs
As mentioned earlier, if someone visits /jobs/12345 (old numeric format) or any
URL where the slug does not match the canonical slug, the server issues a 301 redirect. This
consolidates all link equity to a single URL per job.
10. robots.txt Strategy
Most developers either do not think about robots.txt at all, or they block too much. Here is BirJob's approach:
export default function robots(): MetadataRoute.Robots {
return {
rules: [
{
userAgent: '*',
allow: '/',
disallow: [
'/admin',
'/api/',
'/hr/', // authenticated HR dashboard
'/candidate/', // authenticated candidate dashboard
'/auth/', // login/register pages
'/unsubscribe',
'/?page=', // paginated homepage variants
'/?search=', // search result variants
'/payments/', // payment flow pages
'/blog/*/export', // export pages for Medium import
],
},
// Block AI training crawlers
{ userAgent: 'GPTBot', disallow: '/' },
{ userAgent: 'Google-Extended', disallow: '/' },
{ userAgent: 'CCBot', disallow: '/' },
{ userAgent: 'anthropic-ai', disallow: '/' },
],
sitemap: 'https://www.birjob.com/sitemap.xml',
host: 'https://www.birjob.com',
};
}
The logic:
- Allow everything public by default. The homepage, job pages, category pages, company pages, blog -- all open for crawling.
- Block authenticated areas. The HR dashboard, candidate panel, admin area, auth pages, and payment flows have zero SEO value. They are behind authentication anyway, but explicitly blocking them prevents wasted crawl budget.
- Block query parameter variants. /?page= and /?search= are blocked at the robots.txt level as a belt-and-suspenders approach alongside the noindex meta tags. This reduces crawl budget waste, especially if Google discovers these URLs through internal links before hitting the noindex tag.
- Block AI training crawlers. GPTBot, Google-Extended, CCBot, anthropic-ai. BirJob's content should not be used to train AI models. The job data belongs to the companies and candidates who posted it.
- Include the sitemap reference. This is trivial but surprisingly often forgotten. Google's documentation says the sitemap location in robots.txt is one of the most reliable ways for crawlers to discover it.
11. Page Speed and Core Web Vitals on Vercel
BirJob is a Next.js app deployed on Vercel. This gets you a lot of performance for free:
- Edge-served static pages. Category pages and company pages are ISR (Incremental Static Regeneration). First request builds the page; subsequent requests are served from the CDN edge in milliseconds.
- Automatic image optimization. Next.js optimizes and lazy-loads images.
- Font optimization. The Inter font is loaded via next/font/google, which self-hosts the font files and eliminates the render-blocking Google Fonts request.
- Script loading strategy. Google Analytics is loaded with strategy="afterInteractive", meaning it does not block the initial render:
<Script
src="https://www.googletagmanager.com/gtag/js?id=G-0HE9KS2GV4"
strategy="afterInteractive"
/>
There are also small details that add up:
- DNS prefetching for Google Analytics domains to reduce connection time:
<link rel="dns-prefetch" href="//www.google-analytics.com" />
<link rel="dns-prefetch" href="//www.googletagmanager.com" />
- Service worker registration for offline-capable PWA support:
<Script id="sw-register" strategy="afterInteractive">
{`if('serviceWorker' in navigator){
navigator.serviceWorker.register('/sw.js')
}`}
</Script>
Core Web Vitals matter for rankings, but I want to be honest about something: in a small market like Azerbaijan, where there is limited competition for many queries, page speed is more of a tiebreaker than a decisive factor. If your content is relevant and your page loads in under 3 seconds, you are probably fine. I do not spend hours chasing a 5-point improvement on Lighthouse scores. The ROI is not there.
12. What Actually Moved Rankings vs. What Did Not Matter
After running BirJob for a while and watching Google Search Console data, here is my honest assessment of what mattered:
High impact
- JobPosting structured data. The single biggest win. Before adding JSON-LD JobPosting schemas, BirJob was invisible in Google's job search experience. After adding them, impressions on job-related queries increased significantly. Google's job search UI is essentially a free listing if you have the right markup.
- Category pages. These rank for queries that the homepage alone cannot target. "IT vakansiyalari Baki" goes to the IT category page, not the homepage. This is basic keyword segmentation, but it works remarkably well.
- Human-readable URLs with keywords. URLs like /jobs/muhasib-kapital-bank-12345 carry keyword signal and improve CTR in search results because users can see what the page is about before clicking.
- Canonical URL enforcement. Preventing duplicate content from eating into crawl budget and diluting page authority.
Medium impact
- Freshness signals in meta descriptions. Including "Updated today" in descriptions improved CTR from search results, but it is hard to isolate the ranking impact vs. the CTR impact.
- Company pages. These rank for brand-name searches, but the volume is lower than generic job queries. Still worth building because the intent is very high -- someone searching "Azercell vakansiya" is ready to apply.
- Open Graph tags. These do not affect Google rankings directly, but they increase traffic from social shares, which indirectly helps.
Low or zero impact
- Meta keywords tag. I still include them because they take two minutes to add and do no harm, but Google has officially ignored meta keywords for over a decade. I keep them mostly for Yandex, which may still use them marginally.
- Sitemap priority values. Google has said they largely ignore the priority attribute. I still set them as a signal to other crawlers and as documentation for myself, but I would not spend time fine-tuning them.
- Chasing perfect Lighthouse scores. Going from 85 to 95 on Lighthouse did not produce any measurable ranking change. Going from 40 to 85 probably did, but I was already past that threshold by using Next.js and Vercel.
- Geo meta tags. I include geo.region, geo.placename, and ICBM coordinates for Baku. These are holdovers from an older era of local SEO. I doubt they do much anymore, but they cost nothing.
13. The Azerbaijani Language Challenge
If you are building for a non-English market, you already know that SEO is harder. Here is what makes Azerbaijan specifically tricky:
Three languages, one market
Job seekers in Azerbaijan search in Azerbaijani, Russian, and English. A single job posting for "Software Developer" might be searched as:
- Azerbaijani: "proqramci vakansiyasi Baki"
- Russian: "vakansii programmist Baku"
- English: "software developer jobs Baku"
BirJob's metadata targets all three. The root layout keywords include Azerbaijani, Russian-influenced, and English terms:
keywords: [
// Year-specific
'vakansiyalar 2026', 'is elanlari 2026', 'Azerbaijan jobs 2026',
// Core Azerbaijani
'is axtarisi Azerbaycan', 'vakansiyalar Baki', 'bos is yerleri',
// Roles
'proqramci vakansiyasi', 'muhasib isi', 'satis meneceri vakansiyasi',
// Sectors
'bank vakansiyalari', 'IT vakansiyalari Baki',
// Employers people search by name
'SOCAR vakansiya', 'Kapital Bank is', 'PASHA Bank ise qebul',
// English
'jobs Azerbaijan', 'Baku jobs', 'job search Baku',
]
Azerbaijani special characters in URLs
As shown earlier, the slugification process transliterates Azerbaijani characters. This is necessary because many systems do not handle characters like ə, ı, and ş well in URLs. But it creates an interesting SEO side effect: the slug "muhasib" does not perfectly match the search query "mühasib". Google is smart enough to handle this, but it is worth being aware of.
The lang="az" attribute
The root HTML element is set to lang="az" (Azerbaijani). This tells Google what
language the content is in. Combined with hrefLang tags, it helps Google serve
the right version to the right users:
<html lang="az">
<head>
<link rel="alternate" hrefLang="az" href="https://www.birjob.com" />
<link rel="alternate" hrefLang="x-default" href="https://www.birjob.com" />
</head>
</html>
The x-default hrefLang tells Google "this is also the default version for users
whose language is not specifically matched." Since BirJob does not have separate language
versions, both point to the same URL.
Year-specific keywords
This one surprised me. Adding the current year to keywords and meta descriptions has a measurable impact. People search for "vakansiyalar 2026" not just "vakansiyalar" because they want to make sure the results are current. I include the year in meta descriptions across the site.
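The catch with year-specific keywords is maintenance: a hardcoded "2026" needs a manual sweep every January. Since these pages revalidate regularly anyway, deriving the year at render time sidesteps that. A small sketch (the helper is mine; BirJob's actual approach may differ):

```typescript
// Expand a base keyword list with year-suffixed variants, so 'vakansiyalar'
// also targets 'vakansiyalar 2026' without a hardcoded year anywhere.
function withYear(base: string[], year: number = new Date().getFullYear()): string[] {
  return [...base, ...base.map((kw) => `${kw} ${year}`)];
}

// withYear(['vakansiyalar', 'is elanlari'], 2026)
// -> ['vakansiyalar', 'is elanlari', 'vakansiyalar 2026', 'is elanlari 2026']
```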
14. Google Search Console Insights for Azerbaijan
Here are some patterns I have observed in Google Search Console that are specific to the Azerbaijani market:
- Mobile dominates. Over 80% of traffic is mobile. This is consistent with Azerbaijan's internet usage patterns, where smartphones are the primary internet device for most people. BirJob's viewport meta and responsive design are not nice-to-haves; they are essential.
- Search queries are long-tail. Instead of one head term dominating, traffic comes from thousands of specific queries: "[company name] vakansiya", "[job title] is Baki", "[sector] vakansiyalari 2026". This is why category pages and company pages matter so much -- they capture the long tail.
- Crawl budget matters at scale. With 5,000+ active job pages, stale listings, category pages, and company pages, the total indexed page count grows fast. Keeping expired jobs noindexed, blocking parameter variations, and using proper canonical tags keeps the index clean.
- Impressions spike on Sundays and Mondays. This makes sense: people start their job search before the work week begins. Publishing new blog content on Saturday or Sunday to catch this wave has been effective.
- Google's job search experience is unevenly distributed. For some queries, Google shows the jobs rich result; for others, it shows regular search results. The pattern seems to depend on whether Google has enough structured data from multiple sources for that query. Being one of the few sites in Azerbaijan with proper JobPosting schema gives BirJob an outsized advantage in the rich results that do appear.
Final Thoughts
SEO for a job board is not fundamentally different from SEO for any other content-heavy site. The principles are the same: make sure Google can find your pages (sitemap), understand your pages (structured data, meta tags), and trust your pages (canonical URLs, no duplicate content, fast load times).
What makes job boards unique is the volume and velocity of content. Pages appear and disappear daily. If your SEO infrastructure cannot handle that -- if your sitemaps are static, if expired jobs stay indexed, if your URLs are ugly -- you are fighting with one hand tied behind your back.
If I had to pick three things that moved the needle the most for BirJob:
- JobPosting structured data -- Gets you into Google's job search experience. This is table stakes.
- Keyword-rich URL slugs with canonical enforcement -- Clean URLs with redirects ensure every job has exactly one authoritative URL.
- Category pages as landing pages -- Captures long-tail queries that the homepage cannot serve.
Everything else -- Open Graph tags, blog content, page speed optimization, geo meta tags, robots.txt tuning -- is useful but secondary. Get the three big things right first. Then iterate.
If you are building a job board or a content aggregator, especially in a non-English market, I hope this was useful. The code examples are real. The results are real. SEO is not magic. It is engineering.
This article was written by the developer behind BirJob.com, Azerbaijan's job aggregator. Questions or corrections? Reach out via the contact page.
