V LOVER Free Download (Build 18967189+DLC)



You are a university freshman who loves watching VTuber live streams. By chance, you joined Big Mountain Production as a manager, meeting three unique VTubers and the actual people behind the characters. Over the next year, lead them toward the path of realizing their dreams! In addition to working hard to earn money and managing channels, you can also leverage your reliable charm as a manager to gradually enter the hearts of the three girls, even developing relationships with them that go beyond “VTuber and manager”!

  • Exquisite, fully animated Hentai scenes (25 H scenes)
  • More than 180 CGs, including variants
  • Interactive, dynamic CGs
  • Three-line parallel development system
  • Full voice acting for the female leads

Players need to arrange the weekly schedules for the three heroines according to their abilities and goals within the in-game year, while paying attention to their physical and mental health.

If players fail to relieve their stress in time, irreversible and significant consequences will occur… Special Date System: fulfilling specific conditions before a designated date, such as a heroine’s birthday, Christmas, or Tanabata, can trigger special storylines! Ultra-Realistic Live Streaming Scenes: with Live2D models and realistic live streaming room settings, players can experience ultra-realistic live streams. VTubers will encounter various unexpected events during streams, and players need to provide appropriate action suggestions based on each heroine’s characteristics!

Features and System Requirements:

  • Dive into an emotionally rich story with multiple endings shaped by your choices.
  • Build relationships with diverse characters, each with unique personalities, backstories, and secrets to uncover.
  • Enjoy high-quality character illustrations, animated scenes, and romantic CG moments that enhance the visual storytelling.
  • Customize your avatar or choose your name to personalize your experience and feel part of the story.

Screenshots

System Requirements

Recommended
OS: Windows 7/8/10/11
Processor: Intel Core i3
Memory: 4 GB RAM
Graphics: NVIDIA GeForce GTX 560 / AMD Radeon HD 6870
DirectX Version: 11
Storage: 10.3 GB available space
Sound Card: DirectSound

Installation Guide

Turn Off Your Antivirus Before Installing Any Game

1 :: Download Game
2 :: Extract Game
3 :: Launch The Game
4 :: Have Fun 🙂

Gemma 3 vs. MiniCPM vs. Qwen 2.5 VL


Introduction

Vision-Language Models (VLMs) are rapidly becoming the core of many generative AI applications, from multimodal chatbots and agentic systems to automated content analysis tools. As open-source models mature, they offer promising alternatives to proprietary systems, enabling developers and enterprises to build cost-effective, scalable, and customizable AI solutions.

However, the growing number of VLMs presents a common dilemma: how do you choose the right model for your use case? It’s often a balancing act between output quality, latency, throughput, context length, and infrastructure cost.

This blog aims to simplify the decision-making process by providing detailed benchmarks and model descriptions for three leading open-source VLMs: Gemma-3-4B, MiniCPM-o 2.6, and Qwen2.5-VL-7B-Instruct. All benchmarks were run using Clarifai’s Compute Orchestration, our own inference engine, to ensure consistent conditions and reliable comparisons across models.

Before diving into the results, here’s a quick breakdown of the key metrics used in the benchmarks. All results were generated using Clarifai’s Compute Orchestration on NVIDIA L40S GPUs, with input tokens set to 500 and output tokens set to 150.

  1. Latency per Token: The time it takes to generate each output token. Lower latency means faster responses, especially important for chat-like experiences.
  2. Time to First Token (TTFT): Measures how quickly the model generates the first token after receiving the input. It impacts perceived responsiveness in streaming generation tasks.
  3. End-to-End Throughput: The number of tokens the model can generate per second for a single request, considering the full request processing time. Higher end-to-end throughput means the model can efficiently generate output while keeping latency low.
  4. Overall Throughput: The total number of tokens generated per second across all concurrent requests. This reflects the model’s ability to scale and maintain performance under load.

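The four metrics above can be computed directly from a streaming client by recording a few timestamps. Below is a minimal, illustrative Python sketch; `fake_stream` is a hypothetical stand-in for a real token iterator from an inference endpoint (it is not any specific Clarifai API):

```python
import time

def measure_stream_metrics(token_stream):
    """Collect per-request metrics from an iterable that yields tokens.

    Returns TTFT, mean latency per token, and end-to-end throughput.
    `token_stream` stands in for any streaming inference client
    (e.g. an SSE or gRPC token iterator).
    """
    start = time.perf_counter()
    ttft = None
    count = 0
    for _ in token_stream:
        now = time.perf_counter()
        if ttft is None:
            ttft = now - start  # Time to First Token
        count += 1
    total = time.perf_counter() - start
    return {
        "ttft_sec": ttft,
        "latency_per_token_sec": total / count if count else None,
        "throughput_tok_per_sec": count / total if total else 0.0,
    }

def fake_stream(n_tokens=150, delay=0.001):
    # Stand-in for a real model stream: yields one token every `delay` s.
    for i in range(n_tokens):
        time.sleep(delay)
        yield f"tok{i}"

metrics = measure_stream_metrics(fake_stream())
```

Overall throughput is the same token count summed across all concurrent requests per wall-clock second, so it has to be measured at the load-generator level rather than per stream.
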
Now, let’s dive into the details of each model, starting with Gemma-3-4B.

Gemma-3-4B

Gemma-3-4B, part of Google’s latest Gemma 3 family of open multimodal models, is designed to handle both text and image inputs, producing coherent and contextually rich text responses. With support for up to 128K context tokens, 140+ languages, and tasks like text generation, image understanding, reasoning, and summarization, it’s built for production-grade applications across diverse use cases.

Benchmark Summary: Performance on L40S GPU

Gemma-3-4B continues to show strong performance across both text and image tasks, with consistent behavior under varying concurrency levels. All benchmarks were run using Clarifai’s Compute Orchestration with input size of 500 tokens and output size of 150 tokens. Gemma-3-4B is optimized for low-latency text processing and handles image inputs up to 512px with stable throughput across concurrency levels.

Text-Only Performance Highlights:

  • Latency per token: 0.022 sec (1 concurrent request)

  • Time to First Token (TTFT): 0.135 sec

  • End-to-end throughput: 202.25 tokens/sec

  • Requests per minute (RPM): Up to 329.90 at 32 concurrent requests

  • Overall throughput: 942.57 tokens/sec at 32 concurrency

Multimodal (Image + Text) Performance (Overall Throughput):

  • 256px images: 718.63 tokens/sec, 252.16 RPM at 32 concurrency

  • 512px images: 688.21 tokens/sec, 242.04 RPM

Scales with Concurrency (End-to-End Throughput): measurements were taken at 2, 8, 16, and 32 concurrent requests; per-level figures are shown in the throughput chart below.

Overall Insight:

Gemma-3-4B provides fast and reliable performance for text-heavy and structured vision-language tasks. For large image inputs (512px), performance remains stable, but you may need to scale compute resources to maintain low latency and high throughput.

If you’re evaluating GPU performance for serving this model, we’ve published a separate comparison of A10 vs. L40S, helping you choose the best hardware for your needs.

[Chart: Gemma-3-4B throughput vs. concurrency]

MiniCPM-o 2.6

MiniCPM-o 2.6 represents a major leap in end-side multimodal LLMs. It expands input modalities to images, video, audio, and text, offering real-time speech conversation and multimodal streaming support.

With an architecture integrating SigLip-400M, Whisper-medium-300M, ChatTTS-200M, and Qwen2.5-7B, the model boasts a total of 8 billion parameters. MiniCPM-o-2.6 demonstrates significant improvements over its predecessor, MiniCPM-V 2.6, and introduces real-time speech conversation, multimodal live streaming, and superior efficiency in token processing.

Benchmark Summary: Performance on L40S GPU

All benchmarks were run using Clarifai’s Compute Orchestration with input size of 500 tokens and output size of 150 tokens. MiniCPM-o-2.6 performs exceptionally well across both text and image workloads, scaling smoothly across concurrency levels. Shared vLLM serving provides significant gains in overall throughput while maintaining low latency.

Text-Only Performance Highlights:

  • Latency per token: 0.022 sec (1 concurrent request)

  • Time to First Token (TTFT): 0.087 sec

  • End-to-end throughput: 213.23 tokens/sec

  • Requests per minute (RPM): Up to 362.83 at 32 concurrent requests

  • Overall throughput: 1075.28 tokens/sec at 32 concurrency

Multimodal (Image + Text) Performance (Overall Throughput):

  • 256px images: 1039.60 tokens/sec, 353.19 RPM at 32 concurrency

  • 512px images: 957.37 tokens/sec, 324.66 RPM

Scales with Concurrency (End-to-End Throughput): measurements were taken at 2, 8, 16, and 32 concurrent requests; per-level figures are shown in the throughput chart below.

Overall Insight:

MiniCPM-o-2.6 performs reliably across a range of tasks and input sizes. It maintains low latency, scales linearly with concurrency, and remains performant even with 512px image inputs. This makes it a solid choice for real-time applications running on modern GPUs like the L40S. These results reflect performance on that specific hardware configuration, and may vary depending on the environment or GPU tier.

[Chart: MiniCPM-o 2.6 throughput vs. concurrency]

Qwen2.5-VL-7B-Instruct

Qwen2.5-VL is a vision-language model designed for visual recognition, reasoning, long video analysis, object localization, and structured data extraction.

Its architecture integrates window attention into the Vision Transformer (ViT), significantly improving both training and inference efficiency. Additional optimizations like SwiGLU activation and RMSNorm further align the ViT with the Qwen2.5 LLM, enhancing overall performance and consistency.

Benchmark Summary: Performance on L40S GPU

Qwen2.5-VL-7B-Instruct delivers consistent performance across both text and image-based tasks. Benchmarks from Clarifai’s Compute Orchestration highlight its ability to handle multimodal inputs at scale, with strong throughput and responsiveness under varying concurrency levels.

Text-Only Performance Highlights:

  • Latency per token: 0.022 sec (1 concurrent request)

  • Time to First Token (TTFT): 0.089 sec

  • End-to-end throughput: 205.67 tokens/sec

  • Requests per minute (RPM): Up to 353.78 at 32 concurrent requests

  • Overall throughput: 1017.16 tokens/sec at 32 concurrency

Multimodal (Image + Text) Performance (Overall Throughput):

  • 256px images: 854.53 tokens/sec, 318.64 RPM at 32 concurrency

  • 512px images: 832.28 tokens/sec, 345.98 RPM

Scales with Concurrency (End-to-End Throughput): measurements were taken at 2, 8, 16, and 32 concurrent requests; per-level figures are shown in the throughput chart below.

Overall Insight:

Qwen2.5-VL-7B-Instruct is well-suited for both text and multimodal tasks. While larger images introduce latency and throughput trade-offs, the model performs reliably with small to medium-sized inputs even at high concurrency. It’s a strong choice for scalable vision-language pipelines that prioritize throughput and moderate latency.

[Chart: Qwen2.5-VL-7B-Instruct throughput vs. concurrency]

Which VLM is Right for You?

Choosing the right Vision-Language Model (VLM) depends on your workload type, input modality, and concurrency requirements. All benchmarks in this report were generated using NVIDIA L40S GPUs via Clarifai’s Compute Orchestration.

These results reflect performance on enterprise-grade infrastructure. If you’re using lower-end hardware or targeting larger batch sizes or ultra-low latency, actual performance may differ. It’s important to evaluate based on your specific deployment setup.
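
One way to sanity-check a deployment against numbers like these is a small load test at a fixed concurrency. The sketch below is illustrative only: `generate` is a hypothetical placeholder for your blocking inference call (simulated here with a sleep), and the RPM and overall-throughput formulas mirror the metrics used in this report, assuming a fixed output length of 150 tokens as in these benchmarks.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def load_test(generate, n_requests=64, concurrency=32, output_tokens=150):
    """Run `n_requests` blocking calls with `concurrency` workers and
    report overall throughput (tokens/sec) and requests per minute (RPM).

    `generate` is a placeholder; swap in a real client for your own
    deployment. Assumes every request produces `output_tokens` tokens.
    """
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        # Fan out all requests; .map blocks until every call returns.
        list(pool.map(lambda _: generate(), range(n_requests)))
    elapsed = time.perf_counter() - start
    return {
        "overall_tok_per_sec": n_requests * output_tokens / elapsed,
        "rpm": n_requests * 60 / elapsed,
    }

# Simulated workload: each "request" just sleeps for 10 ms.
result = load_test(lambda: time.sleep(0.01))
```

Running the same loop at 2, 8, 16, and 32 workers reproduces the concurrency-scaling curves reported above for your own hardware.
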

MiniCPM-o-2.6
MiniCPM offers consistent performance across both text and image tasks, especially when deployed with shared vLLM. It scales efficiently up to 32 concurrent requests, maintaining high throughput and low latency even with larger image inputs.

If your application requires stable performance under load and flexibility across modalities, MiniCPM is the most well-rounded choice in this group.

Gemma-3-4B
Gemma performs best on text-heavy workloads with occasional image input. It handles concurrency well up to 16 requests but begins to dip at 32, particularly with larger images.

If your use case is primarily focused on fast, high-quality text generation with small to medium image inputs, Gemma provides strong performance without needing high-end scaling.

Qwen2.5-VL-7B-Instruct
Qwen2.5 is optimized for structured vision-language tasks such as document parsing, OCR, and multimodal reasoning, making it a strong choice for applications that require precise visual and textual understanding.

If your priority is accurate visual reasoning and multimodal understanding, Qwen2.5 is a strong fit, especially when output quality matters more than peak throughput.

To help you compare at a glance, here’s a summary of the key performance metrics for all three models at 32 concurrent requests across text and image inputs.

Vision-Language Model Benchmark Summary (32 Concurrent Requests, L40S GPU)

 

 

Metric                           | Model                  | Text Only | 256px Image | 512px Image
Latency per Token (sec)          | Gemma-3-4B             | 0.027     | 0.036       | 0.037
Latency per Token (sec)          | MiniCPM-o 2.6          | 0.024     | 0.026       | 0.028
Latency per Token (sec)          | Qwen2.5-VL-7B-Instruct | 0.025     | 0.032       | 0.032
Time to First Token (sec)        | Gemma-3-4B             | 0.236     | 1.034       | 1.164
Time to First Token (sec)        | MiniCPM-o 2.6          | 0.120     | 0.347       | 0.786
Time to First Token (sec)        | Qwen2.5-VL-7B-Instruct | 0.121     | 0.364       | 0.341
End-to-End Throughput (tokens/s) | Gemma-3-4B             | 168.45    | 124.56      | 120.01
End-to-End Throughput (tokens/s) | MiniCPM-o 2.6          | 188.86    | 176.29      | 160.14
End-to-End Throughput (tokens/s) | Qwen2.5-VL-7B-Instruct | 186.91    | 179.69      | 191.94
Overall Throughput (tokens/s)    | Gemma-3-4B             | 942.58    | 718.63      | 688.21
Overall Throughput (tokens/s)    | MiniCPM-o 2.6          | 1075.28   | 1039.60     | 957.37
Overall Throughput (tokens/s)    | Qwen2.5-VL-7B-Instruct | 1017.16   | 854.53      | 832.28
Requests per Minute (RPM)        | Gemma-3-4B             | 329.90    | 252.16      | 242.04
Requests per Minute (RPM)        | MiniCPM-o 2.6          | 362.84    | 353.19      | 324.66
Requests per Minute (RPM)        | Qwen2.5-VL-7B-Instruct | 353.78    | 318.64      | 345.98

 

Note: These benchmarks were run on L40S GPUs. Results may vary depending on GPU class (such as A100 or H100), CPU limitations, or runtime configurations including batching, quantization, or model variants.

Conclusion

We have reviewed the benchmarks for MiniCPM-o 2.6, Gemma-3-4B, and Qwen2.5-VL-7B-Instruct, covering their latency, throughput, and scalability under different concurrency levels and image sizes. Each model performs differently depending on the task and workload requirements.

If you want to try out these models, we have launched a new AI Playground where you can explore them directly. We will continue adding the latest models to the platform, so keep an eye on our updates and join our Discord community for the latest announcements.

If you are also looking to deploy these open-source VLMs on your own dedicated compute, our platform supports production-grade inference and scalable deployments. You can quickly get started by setting up your own node pool and running inference efficiently. Check out the tutorial below to get started.

 



How To Find The Secret Terraria Dungeon In Palworld


The new Palworld: Tides of Terraria update is live with tons of new features, and there is also a secret location you can discover on the map to teleport you to a new Terraria-themed dungeon. Here we guide you to the hidden location, where you can capture all-new Pals themed around Terraria and fight the Eye of Cthulhu boss. The dungeon rewards also have a chance to reward you with the Terra Blade sword schematic, Celestial Sigil schematic, Hallowed armor, and more.

How to find the secret Terraria Dungeon in Palworld

Fisherman’s Point

As shown in the image above, the closest fast travel point will be at the Fisherman’s Point, which is a location just south of the volcano. Traveling southeast from Fisherman’s Point will allow you to uncover a new island on your map with a fast travel point called the Sealed Realm of Terraria.

This location is technically two islands. The larger portion is a great place to explore and capture two of the update’s new Pals: Palumba and Finsider.

Terraria dungeon entrance

The smaller of the two islands has a large statue and a new dungeon point (glowing blue ring) you can use to teleport you to the secret Terraria location.

Pals and treasure inside

Here you can catch several new Pals:

  • Herbil
  • Green Slime
  • Blue Slime
  • Purple Slime
  • Red Slime
  • Cave Bat
  • Demon Eye
  • Enchanted Sword

Chests in the dungeon have a chance to reward new Hallowed bars and Hallowed armor inspired by Terraria.

Additionally, there are random chests scattered throughout, so search them for extra loot and Hallowed bars. The entire dungeon is also filled with decor that looks like honey pots; take the time to break these, as each pot usually holds two to three Hallowed bars. Collect as many as you can if you want to fight Palworld’s new Terraria-themed raid boss.

Eye of Cthulhu boss

Breakable wall inside dungeon cave

Continue through the dungeon until you reach a dead end with lots of colorful glowing crystals on the walls. As shown in the image above, this area will have sections of wall that are a gold color and look different from the rest of the wall. These are walls you can shoot and melee through to access other areas of the dungeon. This is how you’ll reach the Eye of Cthulhu boss at the end. Just keep checking all the pathways here. All of these unlocked passages will be a dead end except one that leads to the Eye of Cthulhu.

Eye of Cthulhu boss

This is a level 45 boss fight, so make sure you’re leveled up with your best Pals equipped before tackling this fight. If you are a fairly new and low-level player, you’ll want to level up a bit first.

Just like other dungeon bosses, you can capture the Eye of Cthulhu as a Pal.

Terraria dungeon rewards

Defeating the Eye of Cthulhu dungeon boss will reward you with chests, which have the chance to give you more Hallowed bars, armor, a schematic to craft the new Terra Blade sword, and more.

For more guides, make sure to check out how to fish in Palworld, change a Pal’s gender and passive skills, move Pals to different worlds, and how to upgrade your schematics.

Dell’s new Premium laptops replace its popular XPS PCs


Dell kicked off 2025 by rebranding all of its long-running laptop lines (and promptly getting roasted for it). Today, the company announced the first wave of successors to its popular XPS laptop series: the Dell Premium 14 and 16.

The two PCs feature Intel Core Ultra Series 2 processors, slightly larger 120Hz displays, and a tweaked Dell logo on their lids. (The brand name no longer has a circle around it.) Otherwise, they look almost identical to their forebears, the XPS 14 and 16, retaining their polarizing minimalist designs with gapless keyboards, seamless touchpads, and Platinum and Graphite finishes.

The Premium 14 and 16 are available now at Dell starting at $1,649.99 and $2,699.99, respectively, with delivery dates listed for mid- to late July.

Shop the new Dell Premium line:





The more portable Premium 14 features a 14.5-inch 2K LCD display, an Intel Core Ultra 7 CPU, integrated Intel Arc Graphics, 16GB to 64GB of RAM, 512GB to 4TB of SSD storage, and newly added WiFi 7 support. Optional upgrades include an OLED touchscreen display and a dedicated Nvidia GeForce RTX 4050 GPU. It’s rated at up to 20 hours of battery life; you’ll get the most usage out of a non-OLED model.


At launch, this laptop maxes out at 32GB of RAM and 4TB of storage, but Dell said an option with 64GB of RAM will be available eventually.


The Dell Premium 14 in Platinum.
Credit: Dell

The more powerful Premium 16 offers a 16.3-inch 2K display, up to an Intel Core Ultra 9 processor, an RTX 5060 graphics card, 16GB to 64GB of RAM, and 512GB to 4TB of storage. It’s supposed to last up to 27 hours on a single charge, but its optional 4K OLED touchscreen display upgrade will bump that down a bit.


The Dell Premium 16 in Graphite.
Credit: Dell

As with the Premium 14, some of the Premium 16’s more advanced configurations aren’t available yet. Dell said the Premium 16 will be “available soon” in other GPU configurations, including models with Intel Arc 140T, RTX 5050, and RTX 5070 graphics. The latter comes with three Thunderbolt 5 ports and will be capable of supporting up to four 8K external displays.

Notably, both new Premium laptops have a neural processing unit (or NPU), but it caps out at 13 TOPS (trillions of operations per second, an AI performance metric). That’s well below the 40 TOPS threshold that would make them Copilot+ PCs. In other words, they won’t come with certain AI features that are available in other popular Windows laptops, like Recall, Studio Effects, and a Cocreator image generator in Microsoft Paint. For some shoppers, that might be a huge plus: In a fall 2024 survey conducted by Intel, 44% of respondents said they considered AI PCs to be “a gimmick or futuristic technology.”

The new Premium laptops aren’t to be confused with Dell’s Pro Premium laptops, which are business-oriented models that are, indeed, Copilot+ PCs.

Razer Iskur V2 X review


Razer is always incredibly on-brand, and the gloves it includes in the packaging for its chairs are no exception. They’re black, edged with the company’s somewhat distinctive bright green around the wrist-hole, and it would have been so easy for the company to just throw a plain black pair in there, or even not to bother at all.

It turns out they’re pretty cheap gloves, and a bit tight too, but their presence speaks to the level of thought and attention to detail that’s been put into this chair despite its budget nature.

How to Build an AI Assistant with Keith Moehring [MAICON 2025 Speaker Series]


MAICON brings together top visionaries and experts in the field of AI, during a three-day conference packed with actionable sessions and networking events, all to position you as the change agent your organization (and career) needs. In this ongoing speaker series, we’re featuring these extraordinary leaders, with forward-looking predictions, actionable tips you can use today, and a preview of their MAICON 2025 sessions.

What is the protagonist’s canon name in Persona 5 The Phantom X?


When you start Persona 5: The Phantom X, you’ll be tasked with naming the protagonist by writing down your name on a worksheet fairly early. If you’ve played Persona games before, you may be familiar with the protagonists’ “official” names from the series, and want to use the canon name for this game, too.

Lucky for you, the canon name should have been filled in already, but if you accidentally deleted it or you don’t know if this is the real deal, then we explain more about the P5X protagonist’s canon name below.

What’s the protagonist’s canon name in Persona 5: The Phantom X?

In Japanese, the canon name for the Persona 5: The Phantom X protagonist is Nagisa Kamishiro. However, this doesn’t fit in the name slot for the English version of the game — so the canon name for the global version is Nagisa Kamisiro.

Ultimately, you do not have to name your character Nagisa, of course. You can literally name him whatever you want. You can name him Zoo Smell, if you want. (Just make sure to change both his first name and last name, if you want to. We saw a lot of names in the beta where only the first name was changed, for some reason.)

It should be noted that since this is a game with online friend capability, the name you set is your account name. You may want to pick something unique or you’ll be one Nagisa Kamisiro in an ocean of them. If you plan on adding a bunch of friends in-game, you may want to set your name as something your buddies can identify.

That is all to say that “canon” has different definitions across the board. Persona protagonists have had varying names across media. For example, the Persona 4 protagonist is named Souji Seta in the manga, but Yu Narukami in Persona 4 Arena and the anime adaptation. Similarly, the protagonist from Persona 5 was named Akira Kurusu in the manga adaptation, but has canonically been referred to as Ren Amamiya, which was the default name in Persona 5 Royal. (Ren is fine, but to me he’ll always be Akira!)

It’s entirely possible that Nagisa will appear elsewhere in other media with a different name, but we’re basing this off of the default name available to us in the beta that we played.

Can you change the protagonist’s name in Persona 5: The Phantom X?

Yes! If you went with Nagisa (or Zoo Smell, by our recommendation) and you’re regretting it, you can change your name for free once. After the first name change, you’ll need to pay 200 Meta Jewels (the premium currency that you can earn by playing the game) for every subsequent change.

You can change your name by clicking your profile icon at the top of your phone screen menu, but only after you’ve progressed a bit in the story. From there, click the ellipses above your set icon to access the name change feature.

Better Models, Smarter Defaults: Claude Sonnet 4, GPT-4.1, and More Control in Visual Studio


We’re excited to share some major improvements to the Copilot experience in Visual Studio, including smarter default models, more choices, and easier ways to manage your usage.

Smarter default model

Copilot in Visual Studio now uses GPT-4.1 as the default model (previously GPT-4o). In our testing, it delivers significantly better performance—faster responses, higher quality suggestions, and greater efficiency overall.

More models to choose from

Want to try something else? You now have access to an ever-broader range of models:

  • Claude Sonnet 4
  • Claude Opus 4
  • Claude Sonnet 3.5
  • Claude 3.7 (non-thinking and thinking)
  • OpenAI o3 mini
  • Gemini 2.0 Flash
  • Gemini 2.5 Pro

Model selections are now sticky, meaning your chosen model stays selected across threads for a smoother workflow. See this documentation to learn about each model’s strengths. Give them a try and let us know what you think!

We’ve also made it easier to enable and switch between models. If a model is available under your plan but not yet enabled, you’ll now see a prompt right in the model picker—no need to navigate to GitHub settings.


Usage management and billing updates

With GitHub’s new billing model, we’ve made a few updates in Visual Studio to help you stay in control.

Track Your Usage Easily

We’ve added a new Copilot Consumptions panel to help you monitor your usage. Just click the Copilot badge in the top-right corner of Visual Studio, then select Copilot Consumptions to see how many premium requests you’ve used across chat, inline suggestions, and more.


From there, you can also access Manage Plan to update your subscription or adjust settings on GitHub.com. If you run out of your premium requests, you’ll automatically switch to a standard model, GPT-4.1, at no extra cost.

Note: Some models may count more heavily toward your quota—for example, the number of premium requests consumed varies depending on the model used. You’ll see this clearly indicated in the model picker before you make a selection.

We hope these updates make Copilot in Visual Studio more powerful and transparent for you.
Have a favorite model? Tell us how you’re using it—we’d love to hear from you! 😊

Check out the new Visual Studio Hub

Stay connected with everything Visual Studio in one place! Visit the Visual Studio Hub for the latest release notes, YouTube videos, social updates, and community discussions.

We appreciate your feedback

Your feedback helps us improve Visual Studio, making it an even more powerful tool for developers. We are immensely grateful for your contributions and look forward to your continued support. By sharing your thoughts, ideas, and any issues you encounter through Developer Community, you help us improve and shape the future of Visual Studio.

Meta’s recruiting blitz claims three OpenAI researchers


In the fight for top AI talent, Meta just reportedly snagged a win, poaching three OpenAI researchers despite rival Sam Altman’s public mockery of Mark Zuckerberg’s lavish hiring tactics.

The latest victory in Zuckerberg’s widely-reported recruiting blitz: Lucas Beyer, Alexander Kolesnikov, and Xiaohua Zhai – who established OpenAI’s Zurich office – have joined Meta’s superintelligence team, the WSJ reports, suggesting Zuckerberg’s methods can deliver.

As Altman, the CEO of OpenAI, first revealed in a recent podcast with his brother Jack, Zuckerberg has been dangling $100+ million compensation packages in an effort to lure top talent from OpenAI. The Journal subsequently reported that Zuckerberg has been personally WhatsApping hundreds of top AI researchers, coordinating targets through his “Recruiting Party 🎉” chat before hosting dinners at his homes in Palo Alto and Lake Tahoe.

The strategy is producing mixed results. Zuckerberg recently bagged Scale AI’s CEO Alexandr Wang with a $14 billion investment, making the 28-year-old one of tech’s priciest hires ever. But bigger game has eluded the Meta CEO, says the WSJ, including OpenAI co-founders Ilya Sutskever and John Schulman, both of whom have gone on to co-found newer startups.

In that podcast, Altman said of Zuckerberg’s charm campaign: “I’m really happy that, at least so far, none of our best people have decided to take him up on [those offers].”

How to Create a Believable World for Your Fiction Characters


A lot goes into creating a fantasy world—or a world for any story, regardless of genre. 

Every world needs its own distinct feel, whether it’s a microcosm of the one we already know, a distant past, a far-out future, or a magical alternate world altogether. From Middle-earth, to Tatooine, to the scandalous world of Bridgerton’s Regency London, it’s the author’s job to make the world feel real and relevant to what’s happening with the characters and plot.

But what makes a fictional world feel real? There are a lot of different tools and approaches available to authors to help you in this important process.

What is world building?

When writing any story, one of the top jobs—and greatest challenges—the author takes on is to create a world that feels realistic and multi-dimensional.

Much more than a backdrop for the action, the story’s world is a crucial foundation for everything that takes place. What are the values in this world? What does the structure of daily life look like? Who has privilege, and who’s left behind? What’s the economic system? What’s got value and what doesn’t?

Whether it’s directly related to the plot of your story or not, these are the types of big questions that will round out your story’s world. You might be surprised at the ways these important dynamics emerge in subtle but important ways throughout the story.

How to start world building

There is no right or wrong way to create a world for your story. In fact, there are a lot of examples of incredible authors, all of whom go about the world building process in very different ways.

Here are a few examples:

V. E. Schwab: The author of “The Invisible Life of Addie LaRue” and other speculative genre fiction famously says she loves to write stories about outsiders, but to know who the outsiders of a fictional world are, one must start by understanding who its insiders are, and why. In this way, Schwab wisely starts to unfold her world from a characters-first perspective, starting with its most central values. To learn more about her process, start with this video.

Margaret Atwood: The multi-award-winning author of “The Handmaid’s Tale” has said she starts her world building by thinking about how her character eats breakfast. What type of kitchen does the character have? Do they prepare their own food or does someone else? Where does their food come from? This process offers her a way to start peeking into the world’s economy and social structures, one step at a time. She shares how she builds out her world from this single moment of the day in this Fast Company article.

Chuck Wendig: Whereas many authors set aside time to map out their worlds before they begin writing, not all do! The author of “Wanderers” prefers to start tackling his stories from the characters and plot, and then revisits the draft to fill out the world building as needed. As he puts it, “the world serves the story, the story doesn’t serve the world.” He offers this and more great world building advice in this blog post.

Reading about other authors’ methods and talking to them about their process when you have the opportunity is a great way to add to your own world building toolbox. But, as they say, your mileage may vary! Just because your favorite author does their world building a certain way doesn’t mean it’s the right way for you.

Give different methods a try, then don’t be afraid to stick with what works for you. In the end, all that matters is that the result is a world that brings the story to life for your readers.

8 tips for creative world building

If creating an entire world feels like a daunting challenge, here are some steps to get you started.

1. Study other authors at work in your genre

It’s important to read widely within the genre you write. As you do so, make a study of the ways other authors bring their worlds to life on the page.

How can you bring these lessons to your own writing?

2. Mix and match different worlds

If you need a spark to get started, draw inspiration from the worlds you already know—whether those be fictional or real!

Then, use these elements as building blocks and start making the world your own.

3. Draw a map of your story’s world

The geography of your world can be as important as the culture—and the two may even inform each other.

You don’t have to be an artist to develop a quick sketch that can help you navigate how the world comes together.

4. Consider what kinds of flora and fauna live in your world

What do the trees and other plants look like? Are some native to certain areas, or do they only grow under certain conditions? What types of creatures exist there?

For worlds more like our own, this may require some careful research; but for more fantastical worlds, this can be an opportunity to set loose your wildest creativity. 

5. Outline your world’s background

How did your world become the way it is at the story’s start?

What is the government like? What about its financial systems? Are there different cultures intermingling? Are there fads or styles within this society?

6. Use all your senses

When we’re out in the real world, we experience it through our senses: sight, sound, smell, touch and taste. Your world will come to life for readers when you let them do the same in your fictional world.

If your character wanders through a market, what spices and herbs might mingle in the air? If your character is on a spaceship, what does the food taste like? If your character spends her weekends in the local coffee shop, how does her favorite table feel? These kinds of details within a world can help to make it feel more multidimensional and real.

A lot of writers fall into the trap of relying on just a few of the senses, like sight and touch. But as you revise your manuscript, look for opportunities to round out these details with the other senses, too. You don’t need to touch on all five senses for every aspect of your world (that would get tiresome pretty quickly) but added in at opportune moments, they can take a world that’s fine and turn it into something remarkable and memorable.

7. Reflect your world’s values

In the real world, values and biases are embedded so deeply that we hardly think about them in daily life—consider the ways the world is built for right-handed people, or some of the phrases we still use from our history. Then, of course, there are the complex consequences of racism, sexism, and other serious issues that continue to plague our society. For better or worse, these all connect to what’s really valued in our world.

So what is valued in your fictional world? Who holds power and influence? Who doesn’t? How are these values reinforced? These small touches can demonstrate important things about your story’s world without having to hit pause and explain it all.

8. Explore thematic elements

Every story has a theme. Your world building should support a deeper exploration of those elements. Look for opportunities for the greater world of your story to reflect, build, and deepen these big questions.

For example, in “The Hunger Games,” the story isn’t only about Katniss. It’s also about power dynamics, control and what it takes to survive. As the series goes on, it also wrestles with themes of trauma and the costs of war and freedom. These themes are reinforced by the details of the story’s world from where we start with Katniss in District 12, to the Capitol, to their fight in the rebellion.

These are only a few examples of ways to explore your world and make it more multidimensional. With these and other exercises, you may surprise yourself with the ideas you come up with, and how complex your world becomes. The more you’re able to consider all aspects of your story’s world, the more dynamic and life-like it will feel to readers. 

World building tools and resources

There are myriad tools and resources for world building available to help you build your skills and flesh out your story. Here are a few excellent places to start:

  • Brandon Sanderson’s BYU lecture series – This leading fantasy author is renowned for his complex fantasy worlds. In this six-part series for students at Brigham Young University, two of his lectures are dedicated to world building. They offer a wealth of information on building compelling worlds, as well as a peek behind the curtain of how a master (and bestseller) gets it done.
  • World Building Reddit – This subreddit is an active community of creatives for all sorts of speculative fiction and world-building endeavors, from authors to gamemasters and more. It’s a great source for insights, support and inspiration within a community of like-minded creators from across the expanses of the Internet.
  • World building software – Did you know there’s software designed to help you through the world building process? In fact, this great list from ProWritingAid includes multiple options you can choose from, depending on your creative style.
  • World building templates – Many have created their own versions of templates, questions and prompts to help authors build out their worlds—there’s something out there for everyone! But it can also be a deluge that’s hard to navigate. I like this organized list of points to consider from Amelia Weins on the Science Fiction Writers Association’s blog, which prioritizes considerations for diversity.
  • Tracking tools for world building – Maintaining consistency within your story’s world is crucial for making it feel real. So how will you remember on page 227 the color of the wallpaper in a shop your character is revisiting from chapter two? There are tools for that. This article breaks down a few ways to approach it (full disclosure, written by this author).

How to reveal your world to readers

Once you’ve built your world, you must introduce it to your readers through your story. The best rule of thumb for sharing key details about your story’s world is to reveal them as they become needed.

While certain classic fantasy authors are notorious for their elaborate detours into backstory (looking at you, Tolkien), most readers respond better to brief glimpses, revealed as naturally as possible, as they become important to the plot and the characters’ development.

You may even find that full threads of your world’s history or culture never make it into the manuscript at all—and that’s OK! It was still well worth the effort if it helped you to create a world rich enough for readers to inhabit. You can even set these nuggets aside for use in a sequel, or as a special treat for newsletter subscribers. 

Further, look for opportunities for your world building to do double duty as characterization. What is your protagonist’s relationship to their world? How does this influence their feelings toward the world’s systems? Do they have special memories or associations with certain foods, places, or rituals? For better or worse, this will color their perspective and how they move through the story’s world, and it should be evident in the way the world is described through the character’s eyes.

Your world is, in many ways, a character as dynamic as your protagonist and supporting cast. It should shift and evolve as the story develops, too! “Game of Thrones” offers an excellent example of this: as winter draws near, so too does the looming threat of the white walkers. The world itself is a ticking clock on the story as it unfolds, and impacts everything taking place across its vast set of characters.

The greatest fictional worlds tell us about ourselves

The world you create doesn’t just tell readers about your story, your characters, and the adventures you send them on. It also reveals important things about the real world—whether your world closely resembles this one or appears vastly different on its surface. Every story offers not just an escape, but also a mirror.

How do you see the world? What do you have to say about it? What troubles you about it? Even if you don’t set out with the intent to tackle these major questions, as an author, your take on them is sure to seep into every aspect of your world.

The more thought and imagination you’re able to offer to bring your world to life, the more clearly these messages and themes will reach your readers.

This is an updated version of a story that was previously published. We update our posts as often as possible to ensure they’re useful for our readers.

Photo via Vitalii Bashkatov/Shutterstock