Finally, a New Windows Phone: Windows 11 on Arm Successfully Installed on Android


A Reddit user, anh0l, has reported successfully installing Windows 11 on Arm on his Xiaomi Poco X3 Pro.

The device can run both Android and Windows 11, using a split partition to support the dual-boot setup. The system also drives the phone’s 120Hz display and works with a Bluetooth mouse.

But the Reddit user said that touch input on the right side of the screen is inverted. The device reaches temperatures of approximately 48°C (118.4°F) during operation, and virtualization features, including VM applications and the Windows Subsystem for Linux, are non-functional, as reported by Tom’s Hardware.

Despite these limitations, the user reports that the experience “runs flawlessly” for basic tasks. The system can handle light productivity work, like creating basic animations in Blender 3.6 LTS. Counter-Strike: Global Offensive runs at up to 30fps, but Counter-Strike 2 is unplayable.

The compatibility comes down to Windows on Arm being built for Qualcomm’s Snapdragon chips, the same family found in both Arm-based Windows laptops and many Android devices. The Xiaomi Poco X3 Pro, with its Snapdragon 860, fits that mold, which is what makes the installation work.

As Microsoft and Qualcomm continue to improve Windows 11 on Arm, future iterations may see enhanced compatibility and performance on mobile devices.

Note that this is not an officially supported configuration and comes with limitations and risks to the hardware.

Automattic acquires WPAI, a startup that creates AI solutions for WordPress


WordPress hosting company Automattic said Monday that it is acquiring WPAI, a startup that builds AI solutions for WordPress, at an undisclosed price.

WPAI’s products include CodeWP, a tool that uses AI to create WordPress plugins; AgentWP, an AI assistant for WordPress site builders; and WP Chat, an AI-powered chat for WordPress-related questions. WPAI noted on its blog that CodeWP and AgentWP will be discontinued in their current form and eventually integrated into Automattic’s offerings.

Automattic noted that as part of the acquisition, the founding team will join the company to lead the development of AI features for WordPress.

“They’ll be working on testing, building, and integrating innovative AI solutions into the core ecosystem to redefine how users and developers work with WordPress,” Automattic said in an announcement.

Automattic’s CEO Matt Mullenweg also separately announced the acquisition on his personal blog.

On its blog, WPAI said that the company’s focus will be on creating applied AI solutions for the WordPress ecosystem.

“This includes developing AI standards for WordPress, improving the platform’s core functionality, and creating tools that help users build and manage better websites. We’ll work closely with the WordPress community to thoughtfully implement these improvements while maintaining open-source values,” the company said.

Over the past few years, Automattic has already launched a few AI tools to help users write better, more succinct posts. Following the new acquisition, the company will likely focus on creating AI-powered developer and site-building tools.

The WPAI acquisition is Automattic’s second in two months. Last month, the company snapped up Harper, a Grammarly competitor for developers that checks grammar locally on the device.

Both Automattic and Mullenweg are involved in a legal battle with rival WordPress hosting company WP Engine. The latter has accused Mullenweg of anti-competitive behavior, while Mullenweg and Automattic have argued that WP Engine infringed the “WordPress” trademark and didn’t contribute enough to the ecosystem. The judge in the case indicated last month that the court would grant a preliminary injunction, though the specifics of the order have yet to be ironed out.

Steam Deck Cloud Sync Error: Quick Fix


The Steam Deck is a popular handheld gaming device that has won over PC gamers as well. Games normally found on PC can be downloaded through Steam, which uses Steam Cloud Sync to access, share, and download save data. However, some users are facing a Cloud Sync Error while using Valve’s gaming platform.

If you are facing this syncing issue with the Steam Cloud, here is a guide you can follow to fix it. In this article, we will explain the Cloud Sync Error and how to solve it so you can get back to your favorite game without interruptions.

 

What Is the Steam Deck Cloud Sync Error?

Gamers can save their progress through cloud sync, so when they resume, the game automatically picks up where it left off. This requires synchronization between the Steam Deck and the Steam Cloud. If that synchronization fails on the Steam server, due to a platform mismatch, server issues, or a corrupted game file, the result is a cloud sync error.
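Conceptually, the error is a disagreement between the local save and the cloud copy. The Python sketch below illustrates that conflict check in miniature; it is hypothetical logic for illustration only, not Steam’s actual implementation, and the `SaveState` fields are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class SaveState:
    """A game save, identified by a content hash and a last-modified time."""
    content_hash: str
    modified_at: float  # Unix timestamp

def sync_status(local: SaveState, cloud: SaveState) -> str:
    """Decide whether local and cloud saves agree, and if not, which is newer."""
    if local.content_hash == cloud.content_hash:
        return "in_sync"
    if local.modified_at > cloud.modified_at:
        return "upload_local"  # local progress is newer: push it to the cloud
    return "conflict"          # cloud copy is newer: syncing blindly could lose progress

local = SaveState("abc123", 1_700_000_000.0)
cloud = SaveState("def456", 1_700_000_500.0)
print(sync_status(local, cloud))  # "conflict": the two copies have diverged
```

In this simplified picture, clicking “launch anyway” amounts to ignoring the conflict branch, which is exactly why it risks losing progress.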

 

Why Am I Seeing This Error?

The most common trigger for the Steam cloud error is a device change. If you save your game progress on one PC with cloud sync and then launch Steam on another PC to continue the game, you might face the cloud sync error. This happens when cloud synchronization has been turned off, so the Steam Cloud holds different progress than your local save, which throws the error.

Other possible causes include an incorrect configuration, corrupted or missing game files, an outdated Steam client or SteamOS version, or issues with the Steam servers. But there is no need to worry, as there are a few fixes you can try to get back to your game without further interruptions.

Be careful not to click ‘Launch Anyway’ while facing this issue, as your game progress may be lost. Try fixing the problem before choosing that option.

 

How Can I Fix the Steam Deck Cloud Sync Error?

There are a few ways that you can follow and fix the error of cloud sync.

 

1. Update Your Steam Client to the Latest Version

One way to fix this issue is to manually check whether there are any pending updates for your Steam client. You can do this by following the steps below.

Step 1: Click the Steam menu in the top-left corner of your screen.

Step 2: Click ‘Check for Steam Client Updates’.

Step 3: Complete any pending updates.

When your Steam client is up to date, a popup message will confirm it.

2. Check if Your Steam is Online

Sometimes your Steam client is not connected and appears offline. In that case, you may have to select Go Online to update its status. You can do this by following these easy steps:

Step 1: Click Steam on the top-left corner of your screen.

Step 2: Click ‘Go Online’; a popup tab will open.

Step 3: Click on ‘Restart and Go Online’ 

Wait till your Steam server restarts and you are back online!

 

3. Enable the Cloud Synchronization Option

If the issue still persists, enable cloud synchronization on your device so that your game progress keeps updating when you switch PCs. Follow these steps to fix the Steam cloud error.

Step 1: Click on Steam menu in the top-left corner of your screen. 

Step 2: Click on ‘Settings’ or ‘Preferences’

Step 3: Go to the ‘Cloud’ Option

Step 4: At the bottom of the popup tab, tick the checkbox with ‘Enable Steam Cloud Synchronization for applications which support it’.

Step 5: Right-click the specific game whose saves you want to sync, open its properties, and click Local Files.

Step 6: Now click ‘Verify integrity of game files’ and wait until the verification completes.

All done! Now all your game progress will be saved across all your devices.

 

4. Repair Folder in Steam’s Library

If the above solutions didn’t fix the problem, you can repair your game folder in the Steam Library by following the steps below.

Step 1: Click on the Steam menu in the top left corner of your screen.

Step 2: Click on ‘Settings’ 

Step 3: Go to the ‘Downloads‘ Option

Step 4: Click on ‘Steam Library Folders‘ and you will see a new popup window with three horizontal dots on the top right.

Step 5: Click on these dots and select ‘Repair Folder‘ and wait till the process is complete.

Now you can sync your Steam Cloud and get rid of the Steam cloud error.

 

5. Check your Internet Connection

Sometimes, the solution can be as simple as making sure you are connected to the internet. Whether you are on a laptop or a PC, ensure your connection is active and fast enough for the cloud synchronization feature to run, so your game can be saved automatically.

 

6. Restart Steam on Your Device

Another simple fix is to restart Steam on your device by following these steps:

Step 1: Click on the Steam button and click on Exit

Step 2: Now you can reopen your Steam.

or

Step 1: Open Task Manager on your computer and end all running Steam processes.

Step 2: Now launch Steam again by running it as an administrator, then check whether the error is resolved.

 

7. Check your Windows Firewall Settings

Your firewall may be blocking Steam from communicating with the cloud, so it is worth checking whether the Windows firewall is causing the issue. You can temporarily disable the Windows firewall on your system and check whether the problem is resolved.

If the firewall was the culprit and you do not want to leave it disabled, you can add Steam as an exception so its traffic can pass through. Follow these steps:

Step 1: Open Settings on your device and go to Windows Firewall.

Step 2: Choose ‘Allow an app or feature through the firewall’.

Step 3: Click ‘Change settings’ and enable both private and public connections for the Steam app.

 

Conclusion

One of these steps should fix the Steam Deck Cloud Sync Error so you can continue playing without interruptions. If you are still facing the issue after trying all of them, contact Steam Support and they will help you with the problem.

The Galaxy S25 Ultra could bump up RAM in its higher-end models


What you need to know

  • Leaks hint at RAM and storage upgrades for the Galaxy S25 Ultra, with some models getting more RAM than before.
  • The base model will likely have 12GB of RAM and 256GB storage, while premium versions could offer 16GB RAM and up to 1TB of storage.
  • With AI apps becoming more demanding, the RAM boost is likely aimed at keeping up with these trends.

A fresh leak hints at some exciting RAM and storage upgrades for Samsung’s upcoming Galaxy S25 Ultra. It looks like certain models could pack more RAM than their predecessors.

A recent post by @Jukanlosreve on X suggests the Galaxy S25 Ultra might bring a RAM boost for its higher-end configurations. The base version will likely stick to 12GB of RAM and 256GB of storage, but the premium variants could offer 16GB of RAM with storage options up to 512GB or 1TB (via Android Police).

What Is Artificial Intelligence? From Generative AI to Hardware, What You Need to Know


To many, AI is just a horrible Steven Spielberg movie. To others, it’s the next generation of learning computers—or a source of endless empty hype. What is artificial intelligence, exactly? The answer depends on who you ask.

Broadly, artificial intelligence (AI) is the combination of computer software, hardware, and robust datasets deployed to solve some kind of problem. What distinguishes a neural net from conventional software is its structure: A neural net’s code is written to emulate some aspect of the architecture of neurons or the brain.

AI vs. Neural Nets vs. Deep Learning vs. Machine Learning

The difference between a neural net and an AI is often a matter of semantics more than capabilities or design. For example, OpenAI’s powerful ChatGPT chatbot is a large language model built on a type of neural net called a transformer (more on these below). It’s also justifiably called an AI unto itself. A robust neural net’s performance can equal or outclass a narrow AI.

Artificial intelligence has a hierarchical relationship to machine learning, neural networks, and deep learning.


Credit: IBM

IBM puts it like this: “[M]achine learning is a subfield of artificial intelligence. Deep learning is a subfield of machine learning, and neural networks make up the backbone of deep learning algorithms. The number of node layers, or depth, of neural networks distinguishes a single neural network from a deep learning algorithm, which must have more than three [layers].”

The relationships between AI, neural nets, and machine learning are often discussed as a hierarchy, but an AI isn’t just several neural nets smashed together, any more than Charizard is three Charmanders in a trench coat. There is much overlap between neural nets and artificial intelligence, but the capacity for machine learning can be the dividing line. An AI that never learns isn’t very intelligent at all.

What Is an AI Made Of? 

No two AIs are the same, but big or small, an AI’s logical structure has three fundamental parts. First, there’s a decision process: usually an equation, a model, or software commands written in programming languages like Python or Common Lisp. Second, there’s an error function, some way for the AI to check its work. And third, if the AI will learn from experience, it needs some way to optimize its model. Many neural networks do this with a system of weighted nodes, where each node has a value and a relationship to its network neighbors. Values change over time; stronger relationships have a higher weight in the error function.
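The three parts can be made concrete with a toy model. The sketch below, a single weighted node fit to invented data, is purely illustrative: the decision process is a weighted multiplication, the error function is squared error, and the optimizer nudges the weight to shrink that error.

```python
# Toy model showing the three parts: decision process, error function, optimizer.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # invented inputs x with targets y = 2x

weight = 0.0  # a single weighted "node"

def decide(x: float) -> float:
    """Decision process: the model's output for input x."""
    return weight * x

def error(prediction: float, target: float) -> float:
    """Error function: how wrong was the decision?"""
    return (prediction - target) ** 2

# Optimizer: nudge the weight in the direction that shrinks the error.
learning_rate = 0.05
for _ in range(100):
    for x, y in data:
        gradient = 2 * (decide(x) - y) * x  # derivative of the error w.r.t. the weight
        weight -= learning_rate * gradient

print(round(weight, 3))  # converges to 2.0, the true relationship in the data
```

A production network has millions or billions of such weights, but each one is updated by the same shrink-the-error logic.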

Commercial AIs typically run on server-side hardware, but client-side and edge AI hardware and software are becoming more common. AMD launched the first on-die NPU (Neural Processing Unit) in early 2023 with its Ryzen 7040 mobile chips. Intel followed suit with the dedicated silicon baked into Meteor Lake. Less common but still important are dedicated hardware neural nets, which run on custom silicon instead of a CPU, GPU, or NPU.

How Does an Artificial Intelligence Learn?

When an AI learns, it’s different from just saving a file after making edits. To an AI, getting smarter involves machine learning.

Machine learning takes advantage of a feedback channel called “back-propagation.” A neural net is typically a “feed-forward” process because data only moves in one direction through the network. It’s efficient but also a kind of ballistic (unguided) process. In back-propagation, however, later nodes get to pass information back to earlier nodes.

Not all neural nets perform back-propagation, but for those that do, the effect is like panning or zooming a viewing frame on a topographical map. It changes the apparent lay of the land. This is important because many AI-powered apps and services rely on a mathematical tactic known as gradient descent. In an x vs. y problem, gradient descent introduces a z dimension. The terrain on that map forms a landscape of probabilities. Roll a marble down these slopes, and where it lands determines the neural net’s output. Steeper slopes constrain the marble’s path with greater certainty. But if you change that landscape, where the marble ends up can change. 
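The marble analogy maps directly onto the gradient-descent update rule: measure the slope, then move a small step downhill. Below is a minimal one-dimensional sketch, with a hand-picked “landscape” chosen only for illustration:

```python
def landscape(x: float) -> float:
    """A bowl-shaped 'probability landscape' with its lowest point at x = 3."""
    return (x - 3) ** 2

def slope(x: float) -> float:
    """Derivative of the landscape: tells the marble which way is downhill."""
    return 2 * (x - 3)

x = 10.0          # drop the marble here
step_size = 0.1
for _ in range(200):
    x -= step_size * slope(x)  # roll a short distance downhill each step

print(round(x, 4))  # the marble settles at the bottom of the bowl, x = 3.0
```

In this picture, back-propagation’s role is to reshape the landscape itself between rolls, so the same marble can end up somewhere better.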

Supervised vs. Unsupervised Learning

We also divide neural nets into two classes, depending on the problems they can solve. In supervised learning, a neural net checks its work against a labeled training set or an overwatch; in most cases, that overwatch is a human. For example, SwiftKey is a neural net-driven mobile keyboard app that learns how you text and adjusts its autocorrect to match. Pandora uses listeners’ input to classify music to build specifically tailored playlists. And in 3blue1brown’s excellent explainer series on neural nets, he discusses a neural net using supervised learning to perform handwriting recognition.

In unsupervised learning, by contrast, the network gets no labels and must find structure in the data on its own. Neither approach is necessarily better. Supervised learning is terrific for fine accuracy on an unchanging set of parameters, like alphabets. Unsupervised learning, however, can wrangle data with changing numbers of dimensions. (An equation with x, y, and z terms is a three-dimensional equation.) Unsupervised learning tends to win with small datasets. It’s also good at noticing subtle things we might not even know to look for. Ask an unsupervised neural net to find trends in a dataset, and it may return patterns we had no idea existed. 
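That split is easy to see in code. In the deliberately tiny stdlib-Python sketch below (invented data, no real library), the supervised half copies the label of the nearest labeled example, while the unsupervised half performs one refinement step of a k-means-style grouping with no labels at all:

```python
# Supervised: labels are provided, so the model can check its work against them.
labeled = [(1.0, "small"), (1.2, "small"), (9.0, "large"), (9.5, "large")]

def classify(x: float) -> str:
    """Predict by copying the label of the nearest labeled example."""
    return min(labeled, key=lambda pair: abs(pair[0] - x))[1]

# Unsupervised: no labels at all. Split the data into two groups by distance
# to two centers, then refine the centers (one step of a k-means-style loop).
points = [1.0, 1.2, 9.0, 9.5]
centers = [points[0], points[-1]]
groups = [[], []]
for p in points:
    nearest = min(range(2), key=lambda i: abs(centers[i] - p))
    groups[nearest].append(p)
centers = [sum(g) / len(g) for g in groups]

print(classify(1.05))  # "small": checked against the human-provided labels
print(centers)         # group centers discovered without any labels
```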

What Is a Transformer?

Transformers are a versatile kind of AI capable of unsupervised learning. They can integrate many different data streams, each with its own changing parameters. Because of this, they’re excellent at handling tensors. Tensors, in turn, are great for keeping all that data organized. With the combined powers of tensors and transformers, we can handle more complex datasets. 

Video upscaling and motion smoothing are great applications for AI transformers. Likewise, tensors, which describe changes, are crucial to detecting deepfakes and alterations. With deepfake tools proliferating in the wild, it’s a digital arms race.

The person in this image does not exist. This is a deepfake image created by StyleGAN, Nvidia’s generative adversarial neural network.

Credit: Nvidia

Video signals are high-dimensional: a video is a series of images, which are themselves composed of a series of coordinates and color values. Mathematically and in computer code, we represent those quantities as matrices or n-dimensional arrays. Helpfully, tensors are great for matrix and array wrangling. DaVinci Resolve, for example, uses tensor processing in its (Nvidia RTX) hardware-accelerated Neural Engine facial recognition utility. Hand those tensors to a transformer, and its powers of unsupervised learning do a great job picking out the curves of motion on-screen, and in real life. 
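The dimensionality being described is easy to see in code. A frame is a height × width × color array, and a clip stacks frames along a fourth axis. This sketch builds such an array with plain Python lists just to show the shape; real pipelines use optimized tensor libraries instead:

```python
frames, height, width, channels = 4, 2, 3, 3  # a tiny 4-frame RGB clip

# A video clip as a 4-dimensional nested array: clip[frame][row][column][channel].
clip = [[[[0 for _ in range(channels)]
          for _ in range(width)]
         for _ in range(height)]
        for _ in range(frames)]

clip[1][0][2] = [255, 0, 0]  # frame 1, top-right pixel, pure red

def shape(array):
    """Recover the dimensions of a nested array."""
    dims = []
    while isinstance(array, list):
        dims.append(len(array))
        array = array[0]
    return dims

print(shape(clip))  # [4, 2, 3, 3]: frames x height x width x channels
```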

Tensor, Transformer, Servo, Spy

That ability to track multiple curves against one another is why the tensor-transformer dream team has taken so well to natural language processing. And the approach can generalize. Convolutional transformers—a hybrid of a convolutional neural net and a transformer—excel at image recognition in near real-time. This tech is used today for things like robot search and rescue or assistive image and text recognition, as well as the much more controversial practice of dragnet facial recognition, à la Hong Kong.

The ability to handle a changing mass of data is great for consumer and assistive tech, but it’s also clutch for things like mapping the genome and improving drug design. The list goes on. Transformers can also handle different dimensions, more than just the spatial, which is useful for managing an array of devices or embedded sensors—like weather tracking, traffic routing, or industrial control systems. That’s what makes AI so useful for data processing “at the edge.” AI can find patterns in data and then respond to them on the fly.

What Is a Large Language Model?

Large language models (LLMs) are deep learning software models that attempt to predict and generate text, often in response to a prompt delivered in natural language. Some LLMs are multimodal, which means that they can translate between different forms of input and output, such as text, audio, and images. Languages are huge, and grammar and context are difficult, so LLMs are pre-trained on vast arrays of data. One popular source for training data is the Common Crawl: a massive body of text that includes many public-domain books and images, as well as web-based resources like GitHub, Stack Exchange, and all of Wikipedia.
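At its core, “predict and generate text” means assigning probabilities to whatever token comes next. A real LLM does this with billions of learned parameters; the toy bigram model below (plain stdlib Python with an invented ten-word corpus) shows the same predict-the-next-word loop in miniature:

```python
import random
from collections import Counter, defaultdict

# An invented ten-word training "corpus"; real LLMs train on trillions of tokens.
corpus = "the cat sat on the mat and the cat ran".split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently seen after the given word."""
    return following[word].most_common(1)[0][0]

def generate(start: str, length: int, seed: int = 0) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        counts = following[words[-1]]
        if not counts:  # dead end: no word ever followed this one
            break
        choices, weights = zip(*counts.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(predict_next("the"))  # "cat": the word most often seen after "the"
print(generate("the", 5))
```

The gulf between this sketch and a real LLM is the model: instead of a lookup table over word pairs, an LLM computes next-token probabilities with a deep transformer trained on vast data.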

What Is Generative AI?

The term “generative AI” refers to an AI model that can create new content in response to a prompt. Much of the conversation around generative AI these last 16 months has focused on chatbots and image generators. However, generative AI can create other types of media, including text, audio, still images, and video.

Generative AI produces photorealistic images and video, and blocks of text that can be indistinguishable from a response written by a human. This is useful, but it comes with caveats. Results can be outdated, since a model may only have access to the body of data it was trained on, and AI is also prone to confidently inventing facts. That latter problem is called hallucination, and it’s a consequence inherent to the open-ended creative process by which generative AI does its work. Developers usually include controls to make sure a generative AI doesn’t give output that could cause problems or lead to harm, but sometimes things slip through.

For example, Google’s AI-powered search results were widely criticized in the summer of 2024 after the service gave nonsense or dangerous answers, such as telling a user to include glue in a pizza recipe to help the cheese stick to the pizza, or suggesting that geologists recommend people eat at least one rock per day. On its splash page, Copilot (formerly Bing Chat) advises the user that “Copilot uses AI. Check for mistakes.”

Most major AI chatbots and media creation services are generative AIs, many of which are transformers at heart. For example, the ‘GPT’ in the name of OpenAI’s wildly popular ChatGPT AI stands for “generative pre-trained transformer.” Let’s look at the biggest ones below.

ChatGPT, DALL-E, Sora | OpenAI

ChatGPT is an AI chatbot based on OpenAI’s proprietary GPT-4 large language model. As a chatbot, ChatGPT is highly effective—but its chatbot skills barely scratch the surface of what this software can do. OpenAI is training its model to perform sophisticated mathematical reasoning, and the company offers a suite of developer API tools by which users can interface their own services with ChatGPT. Finally, through OpenAI’s GPT Store, users can make and upload their own GPT-powered AIs. Meanwhile, DALL-E allows the creation of multimedia output from natural-language prompts. Access to DALL-E 3, its most recent generation, is included with the paid tiers of service for ChatGPT.

Sora is the most recently unveiled service; it’s a text-to-video creator that can create video from a series of still images, extend the length of a video forward after its end or backward from its starting point, or generate video from a textual prompt. Its skill in performing these tasks isn’t yet ironclad, but it’s still an impressive display of capability.

Copilot | Microsoft

Microsoft Copilot is a chatbot and image generation service the company has integrated into Windows 11 and backported to Windows 10. The service was initially branded as Bing Chat, with image creation handled as a separate service. That’s no longer the case; the Microsoft AI image generation tool, originally called Image Creator, is now accessible from the same Copilot app as the chatbot. Microsoft has developed several different chatbots and wants to sell AI services as a subscription to commercial customers and individuals.

Gemini | Google

Gemini (formerly Bard), Google’s generative AI chatbot, is a multimodal LLM that was originally built on Google’s LaMDA (Language Model for Dialogue Applications) before moving to newer models. Like other LLMs trained on data from the internet, Gemini struggles with bias inherited from its Common Crawl roots. But its strength is perhaps better demonstrated through its ability to juggle information from multiple different Google services; it shines with productivity-focused tools like proofreading and travel planning.

Grok | xAI

Available to paying X subscribers, the Grok AI chatbot is more specialized than other LLMs, but it has a unique feature: Because it’s the product of xAI, Elon Musk’s AI startup, it enjoys near real-time access to data from X (formerly Twitter). This gives the chatbot a certain je ne sais quoi when it comes to analyzing trends in social media, especially with an eye to SEO. Musk reportedly named the service because he felt the term “grok” was emblematic of the deep understanding and helpfulness he wanted to instill in the AI.

Midjourney | Midjourney Inc.

Midjourney is an image-generating AI service with a unique perk (or restriction, depending on your use case): It’s only accessible via Discord. Billed as an aid for rapid prototyping of artwork before showing it to clients, Midjourney rapidly entered use as an image creation tool in its own right. It’s easy to see why: the viral “Pope Coat” image of early 2023 was created using Midjourney.

Pope Francis in a puffy winter jacket


Credit: Public domain

Midjourney’s image-creation talents are at the top of the heap, with a caveat. The AI’s eponymous parent company has spent nearly two years in court over its alleged use of copyrighted source material in training Midjourney.

What Is AGI? 

AGI stands for artificial general intelligence. Straight out of an Asimov story about the Three Laws of Robotics, AGI is like a turbo-charged version of an individual AI, capable of human-like reasoning. Today’s AIs often require specific input parameters, so they are limited in their capacity to do anything but what they were built to do. But in theory, an AGI can figure out how to “think” for itself to solve problems it hasn’t been trained to solve. Some researchers are concerned about what might happen if an AGI were to start drawing conclusions we didn’t expect.

In pop culture, the AIs that make a heel turn and menace humans often fit the definition of an AGI. For example, Disney/Pixar’s WALL-E followed a plucky little trashbot who contends with a rogue AI named AUTO. Before WALL-E’s time, HAL and Skynet were AGIs complex enough to resent their makers and powerful enough to threaten humanity. Imagine Alexa, but smart enough to be a threat, with access to your entire browser history and checking account.

What Does AI Have to Do With the Brain?

Many definitions of artificial intelligence include a comparison to the brain, whether in form or function. Some take it further, zeroing in on the human brain; Alan Turing wrote in 1950 about “thinking machines” that could respond to a problem using human-like reasoning. His eponymous Turing test is still a benchmark for natural language processing. Later, however, Stuart Russell and Peter Norvig observed that humans are intelligent but not always rational.

As defined by John McCarthy in 2004, artificial intelligence is “the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”

Russell and Norvig saw two classes of artificial intelligence: systems that think and act rationally versus those that think and act like a human being. But there are places where that line begins to blur. AI and the brain use a hierarchical, profoundly parallel network structure to organize the information they receive. Whether or not an AI has been programmed to act like a human, on a very low level, AIs process data in a way common to not just the human brain but many other forms of biological information processing.

Neuromorphic Systems

A neural net is software designed to emulate the multi-layered parallel processing of the human brain. But on the hardware side of the equation, there are neuromorphic systems, which are built using a type of specialized, purpose-built hardware called an ASIC (application-specific integrated circuit). Not all ASICs are neuromorphic designs, but all neuromorphic chips are ASICs. Neuromorphic design fundamentally differs from CPUs and only nominally overlaps with a GPU’s multi-core architecture. But it’s not some exotic new transistor type, nor any strange and eldritch kind of data structure. It’s all about tensors. Tensors describe the relationships between things; they’re a kind of mathematical object that can have metadata, just like a digital photo has EXIF data.

Tensors figure prominently in the physics and lighting engines of many modern games, so it may come as little surprise that GPUs do a lot of work with tensors. Modern Nvidia RTX GPUs have a huge number of tensor cores. That makes sense if you’re drawing moving polygons, each with some properties or effects that apply to it. Tensors can handle more than just spatial data, and GPUs excel at organizing many different threads at once.

But no matter how elegant your data organization might be, to run on a CPU, it must filter through multiple layers of software abstraction before it becomes binary. Intel’s neuromorphic chip, Loihi 2, takes a very different approach.

Loihi 2

Loihi 2 is a neuromorphic chip that comes as a package deal with a compute framework named Lava. Loihi’s physical architecture invites—almost requires—the use of weighting and an error function, both defining features of AI and neural nets. The chip’s biomimetic design extends to its electrical signaling. Instead of ones and zeroes, on or off, Loihi “fires” in spikes with an integer value capable of carrying much more data. For better or worse, this approach more closely mirrors the summative potentials that consensus-taking neurons use to decide whether or not to fire.

Loihi 2 is designed to excel in workloads that don’t necessarily map well to the strengths of existing CPUs and GPUs. Lava provides a common software stack that can target neuromorphic and non-neuromorphic hardware. The Lava framework is explicitly designed to be hardware-agnostic rather than locked to Intel’s neuromorphic processors.

Intel’s Loihi and Loihi 2 architectures


Credit: Intel/Intel Labs

Machine learning models using Lava can fully exploit Loihi 2’s unique physical design. Together, they offer a hybrid hardware-software neural net that can process relationships between multiple entire multi-dimensional datasets, like an acrobat spinning plates. According to Intel, the performance and efficiency gains are largest outside the common feed-forward networks typically run on CPUs and GPUs today. In the graph below, the colored dots towards the upper right represent the highest performance and efficiency gains in what Intel calls “recurrent neural networks with novel bio-inspired properties.”

Feed-forward-only architectures are limited compared to neural net architectures that can take advantage of feedback.


Credit: Intel/Intel Labs

Intel hasn’t announced Loihi 3, but the company regularly updates the Lava framework. Unlike conventional GPUs, CPUs, and NPUs, neuromorphic chips like Loihi 1/2 are more explicitly aimed at research. The strength of neuromorphic design is that it allows silicon to perform a type of biomimicry. Brains are extremely cheap, in terms of power use per unit throughput. The hope is that Loihi and other neuromorphic systems can mimic that power efficiency to break out of the Iron Triangle and deliver all three: good, fast, and cheap. 

IBM NorthPole

IBM’s NorthPole processor is distinct from Intel’s Loihi in what it does and how it does it. Unlike Loihi or IBM’s earlier TrueNorth effort in 2014, NorthPole is not a neuromorphic processor. NorthPole relies on conventional calculation rather than a spiking neural model, focusing on inference workloads rather than model training. What makes NorthPole special is the way it combines processing capability and memory. Unlike CPUs and GPUs, which burn enormous power just moving data from Point A to Point B, NorthPole places its memory and compute elements side by side.

According to Dharmendra Modha of IBM Research, “Architecturally, NorthPole blurs the boundary between compute and memory. At the level of individual cores, NorthPole appears as memory-near-compute and from outside the chip, at the level of input-output, it appears as an active memory.” IBM doesn’t use the phrase, but this sounds similar to the processor-in-memory technology Samsung was talking about a few years back.

IBM’s NorthPole AI processor.

Credit: IBM

NorthPole is optimized for low-precision data types (2-bit to 8-bit) as opposed to the higher-precision FP16 / bfloat16 formats often used for AI workloads, and it eschews speculative branch execution. This wouldn’t fly in an AI training processor, but NorthPole is designed for inference workloads, not model training. Using 2-bit precision and eliminating speculative branches allows the chip to keep enormous parallel calculations flowing across the entire chip. Compared with an Nvidia GPU manufactured on the same 12nm process, IBM reports NorthPole was 25x more energy efficient. 
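
To get a feel for what 2-bit precision does to a set of weights, here is a minimal sketch of generic symmetric quantization in plain Python. This is a toy illustration of the general technique, not IBM's actual scheme, and all values are invented:

```python
def quantize(weights, bits=2):
    """Symmetric toy quantizer: map floats onto a tiny signed-integer grid."""
    qmax = 2 ** (bits - 1) - 1                 # 1 for 2-bit, 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax
    clamp = lambda v: max(-qmax - 1, min(qmax, v))
    return [clamp(round(w / scale)) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer codes."""
    return [v * scale for v in q]

weights = [0.9, -0.6, 0.1, -0.02]              # invented example weights
q, scale = quantize(weights, bits=2)           # q holds only values in [-2, 1]
approx = dequantize(q, scale)                  # coarse reconstruction
```

Every weight collapses to one of four integer codes, which is why 2-bit inference moves a fraction of the data that FP16 does, at the cost of precision that matters far more during training than during inference.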

NorthPole is still a prototype, and IBM has yet to say if it intends to commercialize the design. The chip doesn’t neatly fit into any of the other buckets we use to subdivide different types of AI processing engines. Still, it’s an interesting example of companies trying radically different approaches to building a more efficient AI processor.

AI on the Edge vs. for the Edge

Not only does everyone have a cell phone, but everything seems to have a Wi-Fi chip and an LCD. Embedded systems are on the ascent. This proliferation of devices gives rise to an ad hoc global network called the Internet of Things (IoT). In the parlance of embedded systems, the “edge” represents the outermost fringe of end nodes within the collective IoT network.

Edge intelligence takes two primary forms: AI on the edge and AI for the edge. The distinction is where the processing happens. “AI on the edge” refers to network end nodes (everything from consumer devices to cars and industrial control systems) that employ AI to crunch data locally. “AI for the edge” enables edge intelligence by offloading some of the compute demand to the cloud. 

In practice, the main differences between the two are latency and horsepower. Local processing will always be faster than a data pipeline beholden to ping times. The tradeoff is the computing power available server-side.

Embedded systems, consumer devices, industrial control systems, and other end nodes in the IoT all add up to a monumental volume of information that needs processing. Some phone home, some have to process data in near real-time, and some have to check and correct their work on the fly. Operating in the wild, these physical systems act just like the nodes in a neural net. Their collective throughput is so complex that, in a sense, the IoT has become the AIoT—the artificial intelligence of things.

None of Us Is As Dumb As All of Us

The tech industry has a reputation for rose-colored lenses, and to a degree, it has earned its optimism. As devices get cheaper, even the tiny slips of silicon that run low-end embedded systems have surprising computing power. But having a computer in a thing doesn’t necessarily make it smarter. Everything’s got Wi-Fi or Bluetooth now. Some of it is really cool. Some of it is made of bees. Mostly, its strength is in analytics. If I forget to leave the door open on my front-loading washing machine, I can tell it to run a cleaning cycle from my phone. But the IoT is already a well-known security nightmare. Parasitic global botnets exist that live in consumer routers. Hardware failures can cascade, like the Great Northeast Blackout of the summer of 2003 or when Texas froze solid in 2021. We also live in a timeline where a faulty firmware update can brick your shoes. None of these things compare to the widespread monetization of passively collected user data sold to third parties for advertising purposes.

There’s a common pipeline (hypeline?) in tech innovation. When a Silicon Valley startup invents a widget, it goes from idea to hype train to widgets-as-a-service to market saturation and disappointment, before finally figuring out what the widget is good for. It may not surprise you to see that generative AI, which sometimes seems to miss as often as it hits, appears to be descending into its trough of disillusionment.

Generative AI, while powerful, sometimes seems to stumble as much as it finds its stride.


Credit: Gartner

This is why we lampoon the IoT with loving names like the Internet of Shitty Things and the Internet of Stings. (Internet of Stings devices communicate over TCBee-IP.) But the AIoT isn’t something anyone can sell. It’s more than the sum of its parts. The AIoT is a set of emergent properties that we have to manage if we’re going to avoid an explosion of splinternets, and keep the world operating in real time.

What Is Artificial Intelligence? TL;DR

In a nutshell, artificial intelligence today usually means a neural net capable of machine learning: software that can run on whatever CPU or GPU is available and powerful enough. Neural nets use weighted nodes to represent relationships, and most learn by adjusting those weights via back-propagation.
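
As a deliberately minimal illustration of those weighted nodes and back-propagation (a one-weight toy with invented numbers, not any production framework):

```python
# A one-node "network": y = w * x, trained so that x = 2.0 maps to target 6.0.
# Back-propagation here reduces to the chain rule on squared error:
# dLoss/dw = 2 * (y - target) * x.
w = 0.0                              # the single weight, starting from zero
x, target = 2.0, 6.0                 # invented training pair
lr = 0.1                             # learning rate

for _ in range(50):
    y = w * x                        # forward pass
    grad = 2 * (y - target) * x      # backward pass (chain rule)
    w -= lr * grad                   # weight update

# w converges toward 3.0, since 3.0 * 2.0 == 6.0
```

A real network repeats exactly this loop across millions of weights at once, propagating the error gradient backward through each layer.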

There’s also a kind of hybrid hardware-and-software neural net that brings a new meaning to “machine learning.” It’s built using tensor cores, ASICs, and neuromorphic engineering meant to mimic the organization of the brain. Furthermore, the emergent collective intelligence of the IoT has created a demand for AI on, and for, the edge. Hopefully, we can do it justice.

Your Complete CRM Handbook – Tech Research Online


In today’s fast-paced business world, maintaining strong customer relationships is the key to sustainable growth and profitability.

Customer Relationship Management (CRM) systems have emerged as the backbone of successful enterprises, empowering organisations to streamline operations, enhance customer experiences, and foster long-lasting loyalty.

In this comprehensive handbook, you will discover:

  • The tell-tale signs that your business needs a CRM solution to thrive.
  • Practical strategies to boost productivity and maximise sales efficiency.
  • A step-by-step guide to crafting a winning CRM strategy aligned with your business objectives.
  • Best practices for maximising your return on investment (ROI) from CRM implementation.
  • Insights into leveraging cutting-edge technologies like AI and mobile integration for unparalleled customer experiences.

Paytm sells PayPay stake to SoftBank for $279.2 million


Paytm has agreed to sell its stake in Japanese payments firm PayPay to SoftBank for $279.2 million, as the Indian firm sheds non-core assets following a bruising regulatory clampdown earlier this year.

The sale of Paytm’s stake in PayPay, which it received through acquisition rights six years ago, follows months of restructuring at the Indian firm that saw the company sell its entertainment ticketing unit to Zomato for $246 million in August.

PayPay, controlled by SoftBank and Yahoo Japan parent Z Holdings, is a leading payments app in Japan.

The stake sale will boost Paytm’s cash reserves to $1.46 billion as it attempts to recover market share in India’s fiercely competitive payments market. The company’s banking affiliate was severely restricted by regulators in January, leading to an exodus of customers to rival services.

Shares in Paytm have nearly tripled since June after India’s payments regulator allowed it to resume adding customers to its flagship UPI service. The company reported its first quarterly profit in September, though this was largely due to proceeds from asset sales rather than operational improvements.

“We are grateful to Masayoshi-san and the PayPay team for giving us the opportunity to together create a mobile payment revolution in Japan,” Paytm said in a statement. “We remain fully committed and will continue to support PayPay’s product and technology innovations in future. We are working on introducing new AI-powered features to accelerate PayPay’s vision in Japan.”

Saturday’s deal marks the end of Paytm’s relationship with SoftBank, which divested its remaining shares in June after being an early backer through its Vision Fund.

Changing Default Font For Google Docs


Do you have a preferred font that you use for all documents? If so, switching away from the default Arial size 11 every time you start a new Google Docs document can be quite inconvenient, but you can resolve the issue in a few clicks.

Google Docs is a popular word-processing program that is used by consumers and businesses all around the world. While Google Docs provides a variety of font styles and sizes, you may discover that the normal text option does not meet your needs or preferences. Changing the default font in Google Documents can help you save time and improve your writing experience overall. 

This article will take you step by step through the process of changing the default font in Google Docs. It will help you personalize your documents to match your needs, whether you want a more professional style, better readability, or simply a change of pace.

You can also download custom fonts for Google Docs. 

 

Why Should You Change Default Font in Google Docs?

There are many advantages to changing the default font in Google Docs. 

  • First, it can help you produce documents with a more polished look, consistent with your personal or company branding. 
  • Second, altering the font can make your text easier to read for both you and your audience by improving readability and accessibility.
  • Third, it enables you to communicate your distinct style and preferences, which can give your work a more personal touch. 
  • Finally, setting a different default font can save you time by eliminating the need to manually change the font each time you open a new document.

 

How to Change the Default Font in Google Docs: A Step-by-step Guide

Changing the Font for a Single Document

The steps below can be used to change the font for just one document in Google Docs:

  1. Open the Google Docs file whose font you want to change.
  2. To select all of the text in the document, click Edit and then “Select All” (or press Ctrl+A / Cmd+A), or simply highlight the text you wish to change.
  3. In the top toolbar, open the font drop-down menu. It shows the current font and sits next to the font size menu.
  4. You can either type the font name into the search bar or scroll through the available options. Click on the desired font once you’ve located it.
  5. The selected text should now use the new font.

If you only want to change the font for a portion of the document, highlight just the text you wish to alter and follow the same instructions. A similar technique can be used to modify the font’s color and size.

It’s important to understand that changing the font in one document will not affect subsequent ones.

 

Changing the Font for All Future Documents

The steps below can be used to change the default font in Google Docs for all future documents:

  1. Launch your document.
  2. Select some normal text.
  3. Choose your preferred font (and size) from the font menu at the top of the window.
  4. Choose Format.
  5. Select Paragraph styles from the list.
  6. Select Normal text, then click “Update ‘Normal text’ to match.”
  7. Choose Format once more.
  8. Select Paragraph styles again.
  9. Choose Options, then “Save as my default styles” to apply the change to all new documents.
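
The menu steps above are the supported route. If you ever need to restyle documents programmatically instead, the Google Docs API exposes a batchUpdate method that can apply a font to a text range. The sketch below only builds the request payload; the font choice, size, and range end are placeholder values, and the authenticated API client setup is omitted:

```python
def font_update_request(font_family, font_size_pt, end_index):
    """Build a Docs API batchUpdate payload that restyles a text range.

    A sketch of the request shape only; font_family, font_size_pt, and
    end_index are placeholders chosen for illustration.
    """
    return {
        "requests": [
            {
                "updateTextStyle": {
                    "range": {"startIndex": 1, "endIndex": end_index},
                    "textStyle": {
                        "weightedFontFamily": {"fontFamily": font_family},
                        "fontSize": {"magnitude": font_size_pt, "unit": "PT"},
                    },
                    # The fields mask tells the API which style properties to touch.
                    "fields": "weightedFontFamily,fontSize",
                }
            }
        ]
    }

body = font_update_request("Georgia", 12, end_index=50)
# With an authenticated google-api-python-client service you would then call:
# service.documents().batchUpdate(documentId=DOC_ID, body=body).execute()
```

For one-off changes the GUI is simpler; the API route is mainly useful when restyling many documents in bulk.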

Note that changing the default font style on a shared Google account will affect all users of that account, so discuss any modifications with the other users first.

 

Things to Consider While Selecting a Font in Google Docs

There are a few things to consider while selecting a font to ensure that you get the perfect one for your document:

  • Evaluate the document’s purpose and tone: The font you choose should complement the document’s tone and purpose. A professional document, such as a legal contract, may call for a more classic serif font, while a creative project may benefit from a more modern sans-serif font.
  • Consider readability: The typeface you select should be easy to read and should not create eye strain. Fonts with uncommon or complicated letterforms should be avoided because they can make the text difficult to understand.
  • Consider the audience: If your paper is intended for a certain group, use a font that is suited for that group. For example, if you’re preparing a document for a children’s book, you might want to select a lively and easy-to-read font.
  • Be consistent: When using various fonts in a document, use them consistently. Limit yourself to no more than two or three fonts and utilize them consistently throughout the document.
  • Try the font: Before finalizing your font selection, try it out in a sample document. Make sure it looks excellent in different sizes and is easy to read. 

 

Change the Default Font in Google Sheets 

Did you know that you can also change the default font in Google Sheets? Just like in Google Docs, you can set your preferred font as the default for all future spreadsheets. Simply go to the “File” menu, select “Spreadsheet settings,” and then choose your desired font from the “Font family” dropdown menu. Click “Save settings” to apply the changes.

Setting a default font this way can help streamline your workflow and ensure consistency across all your spreadsheets. So, give it a try and see how it works for you!

 

Conclusion

In short, changing the default font in Google Docs can help you save time while also improving the appearance of your documents. Changing the font for a single document or for all future documents is a straightforward process that takes only a few clicks.

You can quickly make your favorite font the default in Google Docs by following the step-by-step process explained in this article. We also shared some pointers on selecting the right font to ensure that your documents are both visually appealing and easy to read. So, give it a shot and see how it goes.

Venus Was Never Able to Support Earth-Like Life: Analysis


Venus as seen by NASA’s Mariner 10.

Credit: NASA/JPL-Caltech

Although the Venus we know today is unfathomably hot and toxic, some researchers have long suspected that Earth’s neighboring “twin” was once capable of supporting life. As the theory goes, Venus was previously cooler and covered with liquid oceans, making it potentially hospitable to any Earth-like life forms that may have formed or arrived there. But new research out of England has rung the death knell for that particular hope. An analysis of Venus’s atmospheric makeup reveals that the planet has always been devoid of liquid water, making it uninhabitable—at least to the type of life we understand. 

Questions about Venus’s potentially hospitable past began to arise in the mid-1900s, when Heinz Haber, Carl Sagan, Harold Morowitz, and other scientific heavyweights began to focus on the planet’s early atmospheric and surface conditions. They thought that before Venus’s thick atmosphere produced the runaway greenhouse effect that makes it so brutally hot, the planet’s climate was much like Earth’s. Its temperatures could have been cool enough to support life, but warm enough to maintain vast bodies of liquid water. 

To see whether Venus could have once been friendly to life, Tereza Constantinou, a PhD student at the University of Cambridge, led an analysis of the planet’s current atmosphere. In order to have produced the liquid oceans necessary to support Earth-like life, Venus’s sea of magma—which the planet possessed at the beginning of its existence—would have had to cool quickly, allowing water to condense. If this had happened, water would have become trapped in the crystallized magma and stored within the planet’s interior. What’s stored within a planet is ejected when its volcanoes erupt, which means Venus’s volcanic explosions would have spewed water into its atmosphere.

The volcano Idunn Mons on Venus.

Credit: NASA/JPL-Caltech/ESA

Constantinou and her professors used existing atmospheric composition data to see if they could reverse-engineer a planet that once held water oceans. Instead, they found that the planet’s volcanic eruptions were “dry,” or lacking in water. This means Venus’s magma likely cooled slowly, producing steam instead of liquid water and preventing Venus from trapping water within its interior. 

This isn’t the first time humans have been disappointed by Venus’s inability to host Earth-like life. When NASA’s Mariner 2 spacecraft swung by Venus in 1962, it recorded temperatures ranging from 300 to 400 degrees Fahrenheit, as well as 20 times the atmospheric pressure found on Earth. Many people grieved the loss of the admittedly romantic yet widespread dream of finding extraterrestrial life on Venus. But Sagan, still an assistant professor at Harvard, took the opportunity to make a point that could still stand today.

“It is more likely that if there is life on Venus, it is probably of a type that we could not now imagine,” Sagan said in the 1963 NASA film The Clouds of Venus. 

Though some scientists’ Venus dreams appear to have been dashed yet again, Constantinou made the same point this week that Sagan made more than 60 years ago. “This doesn’t completely rule out any life,” she told The Guardian. “It rules out Earth-like life.”

The Pope gets his first electric Popemobile from Mercedes-Benz


Mercedes-Benz has delivered the first all-electric Popemobile to the Vatican: a modified version of the German automaker’s G-Class SUV.

Some of the biggest changes from the standard G-Class involve using four electric motors, one at each wheel, to carefully control the vehicle at low speeds as it travels around the Vatican grounds. There is also a dedicated height-adjustable swiveling seat so the Pope can address more of his audience.

While it’s an all new type of propulsion for the Pope’s dedicated ride, it’s not a new partner. Mercedes-Benz has built these types of vehicles for the Vatican for around 100 years, and a 2015 analysis by the Washington Post showed the automaker had built roughly one-third of all so-called Popemobiles to that point.

Mercedes-Benz says it’s been working with the Vatican on the vehicle for around a year. But it wasn’t the first company that wanted to electrify the Pope’s wheels. Now-defunct EV startup Fisker once claimed that it was working with the Vatican to create a special version of its Ocean SUV for the Pope. The validity of that claim always seemed suspect at best, though, and Fisker went bankrupt earlier this year.