The Friday Roundup – Shooting Secrets and an A.I. Comparison


YouTube shooting secrets.

Shooting Secrets Creators Don’t Show You

When you look at a lot of the videos that well-known creators produce and upload, they all look pretty slick from the outside.

Because of that, when you first start out for yourself, you tend to get the idea that they are these super-smooth presentation experts who never screw up!

The truth is that they, like everyone else, mess up lines and takes over and over, and have no inherent ability that you and I lack.

What they do have is experience through practice, repetition and the ability to smoothly edit out mistakes, again through experience and practice!

So to get things back in perspective, especially if you are new, here’s a close look at what a YouTube presentation or tutorial video really looks like in the making.


AI Background Remover vs Green Screen – PowerDirector

Now that we have moved into the era of A.I. background removal, it would seem that green screen footage is no longer needed.

It would be nice to think that way, but in reality the effectiveness of one over the other is still going to be determined on a case-by-case basis.

Although A.I. is improving in leaps and bounds, it still has trouble with things like hairlines or instances where the subject is not clearly distinct from the background.

The downside for green screen is that in most cases you really have to shoot your footage very well against a very evenly lit background; otherwise you won’t get a good key.

That’s why for my money I still think the green screen in PowerDirector is one of the best around at the consumer level.

It allows you to select up to three shades of green from your background to get a very accurate key, which greatly reduces (but does not eliminate!) the need for pro-level lighting on that background.
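
If you are curious what sampling multiple shades actually means under the hood, here is a minimal sketch of the idea in TypeScript. To be clear, this is my own illustration, not CyberLink’s actual algorithm: a pixel is keyed out if it sits close to any of the sampled background shades, which is why uneven lighting hurts less when you can sample more than one.

type RGB = [number, number, number];

// Straight-line distance between two colors in RGB space.
function colorDistance(a: RGB, b: RGB): number {
  return Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
}

// Key out any pixel within `tolerance` of ANY sampled shade.
// `pixels` is RGBA data, e.g. from a canvas getImageData() call.
function chromaKey(pixels: Uint8ClampedArray, samples: RGB[], tolerance = 60): void {
  for (let i = 0; i < pixels.length; i += 4) {
    const px: RGB = [pixels[i], pixels[i + 1], pixels[i + 2]];
    if (samples.some((s) => colorDistance(px, s) < tolerance)) {
      pixels[i + 3] = 0; // zero the alpha: this pixel drops out
    }
  }
}

// Three shades sampled from an unevenly lit green backdrop (made-up values):
// chromaKey(frame.data, [[0, 177, 64], [20, 160, 70], [10, 140, 55]]);

Real keyers work in smarter color spaces and feather the edges rather than hard-cutting the alpha, but the multi-sample idea is the part that matters here.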


Seedance 2.0 or VEO 3.1 Put to the Test

Recently the people at CyberLink posted a video showing the differences between various A.I. models for creating different assets inside their interface.

This week we have something similar from Filmora and this is a good thing.

When you have access to three or four different A.I. models to create an asset or do some kind of work for you, but they cost credits you have to buy, it’s best if you know what is what.

Each A.I. model will give you results that are quite different and each one has strengths and weaknesses depending on the type of content you want it to create.

So here’s a comparison in Filmora of the Seedance 2.0 and VEO 3.1 models for you to check out.


Unlock the secret to creating the viral Color Wheel Trend Effect

OK, take a deep breath and get ready!

This is a tutorial from the people at Filmora covering a specific task or effect as part of an ongoing series they seem to be creating.

Now, the reason for the deep breath is the speed at which they pump the instructions at you; this is not presented at a slow or leisurely pace!

However in spite of that, it is a solid tutorial showing how to pull off a very popular effect right now.

Having said that, it is important to bear in mind that, to me, the effect achieved is not the end product of the tutorial.

The real value lies in the use of the tools and techniques available to you as an editor so that you can use them in your own way.


How to Write Video Descriptions for Any Platform

These days you have to optimize every single aspect of an uploaded video if you want it to appear in search or get presented to an audience on YouTube.

The problem has always been that in the short term, spammy “gaming the system” type strategies often worked, giving rise to the idea that doing those things was beneficial.

In almost every case of those strategies, the long-term effect was that the algorithms caught up, and those videos or channels ended up being cast into the wilderness!

Video descriptions are one of the more obvious examples of this and have undergone quite the evolution over the years.

Getting them right takes an understanding of what the search engines actually want while at the same time making them appealing to humans.


This Movie Poster Moves – Fusion for Noobs!

I have probably said this a million times, but if you are using DaVinci Resolve as your editing software and not using the Fusion Page, you are leaving way too much on the table.

For my money (actually it’s free!) the two pages in Resolve that raise it above the crowd are the Color Page and the Fusion Page.

The problem with both of them is that they are pro-level tools and were designed with pros in mind, which is a complicated way of saying they are really complex at first sight!

However once you understand how they work and what they can do, it’s a whole new world.

In the video below Daniel Batal takes you step by step through quite a complex exercise in Fusion, but has the humanity to explain exactly what he is doing, and why, as he goes.

It is by far one of the better tutorials I have seen for the Fusion Page, even if the project itself is not really something you would be doing. Well worth the time to watch.


How Sound Changed Editing Forever – The Birth of Cinematic Sound

This looks like it is going to be a pretty interesting series from the folks at Film Editing Pro.

They usually put out very high-quality content as far as tutorial and instructional videos go, so I am looking forward to this one.

In this first video they cover the historical beginnings of cinematic sound, which is something we all take for granted these days.


Best Times to Post Videos on Social Media in 2026

Movavi social posting image.

Given the current levels of competition when it comes to posting to YouTube, Instagram or whatever, you really need to know the fine details.

Of course there are the usual suspects like titles and descriptions, but the bottom line is that if you can leverage an advantage, then you need to do it!

So that brings us to timing your posts to maximize the impact that post has and to make sure it has the best possible chance for distribution.


How Lighting Changes Emotion in Film (Interview vs Cinematic vs Horror)

This is a very basic but easy-to-follow exercise in lighting.

I thought it was useful to add because it shows the way different lighting setups directed at faces can dramatically change the mood being conveyed.

I see a lot of videos online where I just know the creator was going for some kind of beauty/seductive look on the subject and ended up making them look like an axe murderer!



Key Takeaways

  • Creators mess up takes just like everyone else; the polished videos you see come from editing out the mistakes.
  • AI background removal technology competes with green screen, but effective use depends on specific scenarios.
  • Filmora compares A.I. models like Seedance 2.0 and VEO 3.1 to help users choose the right tool for their needs.
  • Lighting dramatically influences the mood in videos, showcasing how different setups can affect the perception of subjects.
  • Optimizing video descriptions and posting times improves visibility and engagement on platforms like YouTube and Instagram.

Visual Studio Code 1.119




Last updated: April 30, 2026

Welcome to the 1.119 release of Visual Studio Code.

Happy Coding!



April 30, 2026

  • Add a preview button for Markdown files. #312425
  • Allow hiding the update notification button in the title bar. #311929

April 29, 2026

  • Organize Markdown settings into subcategories for better discoverability. #313363
  • Add support for indexing external ingest in virtual file systems for chat codebase context. #313281
  • Add support for attaching browser tabs to chat to share page snapshots as context. #312169
  • Add support for Copilot CLI plan mode in AHP. #312050
  • Allow agents to request access to browser tabs through a permission dialog. #297372

We really appreciate people trying our new features as soon as they are ready, so check back here often and learn what’s new.

5 Smart Questions to Ask Before Building Home in Queensland


Questions to Ask Before Building in Queensland
ID 52753065 © imagesupply | Dreamstime.com

Building a new home in Queensland is a major decision, and the best outcomes usually come from asking the right questions before plans are finalised. For growing families, upsizers, and multigenerational households, early clarity can make the difference between a home that simply looks good and one that genuinely works for the block, the budget, and everyday life.

Matching the Design to the Block and Household

One of the first smart questions to ask is whether the chosen design actually suits both the land and the people who will live in it. A home may look impressive in a brochure, but Queensland blocks can vary significantly in size, slope, orientation, and frontage, which all affect what will work well in practice.

This is often where families start weighing how different builders approach flexibility and design fit. Looking at providers such as Neptune Homes custom home builders in Queensland can help frame the broader question of whether a home is being shaped around real household needs, including storage, privacy, shared living areas, and room to adapt over time.

Understanding the Full Budget Picture

Another smart question is what the total cost is likely to include beyond the advertised base price. Many people focus first on the house itself, but site conditions can quickly change the financial picture. In Queensland, factors such as slope, access, drainage, and engineering requirements often influence the final cost more than expected.

It is worth asking specifically about soil classification, site fall, retaining needs, and possible changes to the slab design. These details affect construction complexity and can add substantial cost, so understanding them early helps set more realistic expectations before the build moves too far ahead.

Checking Local and Estate Requirements

A practical question many buyers overlook is whether the planned home will need to change to meet council or estate rules. Depending on the location, there may be restrictions around setbacks, rooflines, façades, materials, fencing, or driveway placement. These requirements can affect both layout and street presentation.

Raising this question early helps avoid disappointment later. A design that suits one block may need meaningful adjustments on another, and those changes can affect both approval timelines and budget. Knowing the local framework upfront makes the planning process more efficient and far less reactive.

Planning for the Next Stage of Family Life

A smart home build should not only reflect how a family lives today, but also how that lifestyle may change over the next five to ten years. This is an important question for families with young children, teenagers, or older relatives who may later become part of the household.

That means looking closely at how flexible the design really is. Separate living zones, adaptable rooms, practical storage, and thoughtful circulation can make a home far more functional as needs shift. Thinking in terms of livable housing design can also help prevent the layout from becoming restrictive sooner than expected.

Designing for Queensland Conditions

Just as important is the question of how well the home responds to the Queensland climate. Warm temperatures, humidity, strong sun, and seasonal weather patterns all shape how comfortable and efficient a home will feel long after construction is complete.

This is where design decisions such as orientation, shading, window placement, and ventilation become especially important. Asking how the home supports cross-ventilation, manages solar exposure, and improves day-to-day comfort can reveal whether the design is truly suited to local conditions rather than simply visually appealing.

Better Questions Lead to Better Building Decisions

The smartest questions before building in Queensland are usually the ones that reveal how the home will function in real life. When buyers take time to test the design against the block, the budget, future family needs, and Queensland conditions, they are in a much stronger position to make confident and informed decisions from the start.


Musk Concludes Testimony At OpenAI Trial


An anonymous reader quotes a report from CNBC: Elon Musk wrapped up his testimony on Thursday as the trial in his lawsuit against OpenAI CEO Sam Altman continued into its fourth day. OpenAI’s attorney, William Savitt, cross-examined Musk in the morning. He asked Musk about the capped nature of Microsoft’s investments in OpenAI, his involvement in negotiations about the company’s structure, and whether he knew about the OpenAI nonprofit’s recent initiatives. “I don’t know what’s going on at OpenAI,” Musk testified.

Savitt also asked Musk about his competing artificial intelligence startup, xAI. While not the main focus of the case, Musk said it is “partly” true that xAI used some of OpenAI’s models to train its own models, a process known as distilling. Musk also suggested that xAI has used OpenAI’s technology to help build the company. Musk sued OpenAI, Altman, and Greg Brockman, the company’s president, in 2024, alleging that they went back on their commitments to keep the artificial intelligence company a nonprofit and to follow its charitable mission. He claims that the roughly $38 million he donated to seed OpenAI, a company he co-founded, was used for unauthorized commercial purposes.

Once Musk wrapped up his testimony after roughly two hours of questioning on Thursday, his attorneys called Jared Birchall, who manages Musk’s billions at his family office, as their next witness. Birchall testified about his knowledge of Musk’s specific donations to OpenAI. Judge Yvonne Gonzalez Rogers oversaw the proceedings from federal court in Oakland, California. The trial will resume on Monday. Recap:

Elon Musk Says OpenAI Betrayed Him, Clashes With Company’s Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)

Total War: Warhammer 40,000 will have destructible terrain elements: ‘That forest, if you don’t like it, you don’t have to keep it’


Total War: SHOW & TELL – YouTube

One of the things on our wishlist for Total War: Warhammer 40,000 is a cover system, and the Total War: Show & Tell video makes it clear that we’re going to get our wish. And also, if we don’t like that cover, we’ll be able to blow it up.

What To Build Vs. Buy In 2026


The Healthcare AI Stack: What’s Worth Building vs. Buying?

Most mid-market healthcare operations leaders have already looked at the major platforms. Epic Cheers. Veradigm. Health Catalyst. They have seen the demos. The capabilities look right. The implementation timelines look long, the price tags look like a health system’s budget, and the fit to their actual data environment looks questionable.

The question becomes: what do you actually build, and what do you buy?

USM Business Systems works with mid-market health systems, specialty pharmacy groups, and pharma/CRO organizations to answer exactly that question. What follows is the framework we use.

Start With the Data Reality

The first thing that determines your stack is your data environment, not your budget or your timeline.

If your EHR is current, your prior auth workflow is structured, and your payer data is clean and reliable, you have more platform options. If you are managing two EHRs from an acquisition, a prior auth process that routes through fax, and payer status updates that live in coordinator inboxes, most platforms will underdeliver.

The reason is straightforward. Enterprise healthcare AI platforms are calibrated to enterprise data infrastructure. Mid-market infrastructure is almost always messier. That is not a failure of the operations team. It is a function of how mid-market healthcare organizations grow.

A platform that assumes a clean data model will give you clean outputs in the demo and noisy outputs in production. The question to ask in every vendor evaluation: what does this platform do with dirty data?

What Platforms Are Good At

Off-the-shelf healthcare AI platforms are strong when:

  • Your data infrastructure matches their integration assumptions
  • Your use case is standard enough that their pre-built models apply without heavy customization
  • You have internal IT capacity to manage ongoing configuration and compliance maintenance
  • Your budget and timeline can absorb a 9–18 month implementation cycle

For organizations where those conditions hold, a platform makes sense. The vendor handles model maintenance, the infrastructure, and the regulatory roadmap.

What Custom AI Agents Are Good At

A custom healthcare AI agent is the right architecture when:

  • Your data environment is non-standard and a platform would require significant cleanup before it could run reliably
  • Your use case is specific enough that pre-built models would require heavy modification regardless
  • You want the agent trained on your actual payer mix, your authorization denial patterns, your specific formulary and patient population
  • You need deployment in weeks, not quarters

The tradeoff is that custom builds require an engineering partner with healthcare domain understanding. Generic AI development shops can build the software. They often miss the operational and compliance logic that determines whether the outputs are actually usable in a regulated environment.

A Practical Framework for the Decision

USM uses a three-question filter with every new healthcare engagement:

First: Is the problem standard or specific? A prior authorization workload at a specialty pharmacy managing oncology patients across 15 payers is not a standard problem. A platform built for median-case prior auth will give median results.

Second: How clean is the underlying data? If significant data normalization is required before a platform can run, that cleanup cost goes into the build-vs-buy calculation. Custom agents can be built to work with imperfect, fragmented data.

Third: What is the decision speed requirement? If you need operational improvements in 8–12 weeks, a platform with a 12-month implementation is not the right answer regardless of long-term fit.
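
Question two is usually the sticking point. “Imperfect, fragmented data” sounds abstract until you see how much of the work is plain normalization; here is a minimal sketch of the kind of logic involved (the status values and patterns are hypothetical, not USM’s code):

function normalizeAuthStatus(raw: string): "approved" | "denied" | "pending" | "unknown" {
  // Payer status updates arrive as free text from faxes, portals, and
  // coordinator inboxes; collapse them into a small controlled vocabulary.
  const s = raw.trim().toLowerCase();
  if (/approv|authorized/.test(s)) return "approved";
  if (/den(y|ial|ied)|reject/.test(s)) return "denied";
  if (/pend|in review|awaiting/.test(s)) return "pending";
  return "unknown"; // anything unrecognized gets routed to a human
}

// normalizeAuthStatus("Auth APPROVED per fax 3/12")  -> "approved"
// normalizeAuthStatus("pending peer-to-peer review") -> "pending"

None of this is glamorous, but it is exactly the layer a median-case platform assumes already exists.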

The Hybrid That Works for Most Mid-Market Healthcare Teams

Most mid-market healthcare operations teams land in a hybrid. They buy infrastructure at the commodity layer (EHR, practice management, claims processing) and build custom at the intelligence layer: the agent that sits on top and synthesizes signals into decisions.

That is the architecture USM, one of the best AI app development companies in the USA, deploys. The agent connects to existing systems via HL7, FHIR API, or structured data export. It does not require an EHR migration or a claims system replacement. It meets the data where it is and builds the visibility and decision layer on top.
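
To make that concrete, here is a minimal sketch of the FHIR side of such a connection: reading a Patient resource over the standard REST interface. The endpoint, token handling, and trimmed-down types are illustrative placeholders, not USM’s implementation.

const FHIR_BASE = "https://ehr.example.com/fhir"; // hypothetical endpoint

// Only the fields this sketch cares about; real Patient resources are larger.
interface FhirPatient {
  resourceType: "Patient";
  id: string;
  name?: { family?: string; given?: string[] }[];
}

async function readPatient(id: string, token: string): Promise<FhirPatient> {
  const res = await fetch(`${FHIR_BASE}/Patient/${encodeURIComponent(id)}`, {
    headers: {
      Accept: "application/fhir+json", // standard FHIR media type
      Authorization: `Bearer ${token}`, // e.g. obtained via SMART on FHIR OAuth
    },
  });
  if (!res.ok) throw new Error(`FHIR read failed: HTTP ${res.status}`);
  return (await res.json()) as FhirPatient;
}

An agent layer performs reads like this across many resource types, then merges the results with HL7 feeds and flat-file exports before anything reaches the decision layer.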

Deployment timeline: 8–12 weeks from scoping to first output. ROI measurement starts at week one.

 

USM offers a no-cost architecture consultation for healthcare operations leaders evaluating AI options. Book a session at usmsystems.com.

 

Slay The Spire 2 Is Fantastic, So Why Is It Being Review-Bombed?



I began playing Slay the Spire 2 shortly after my colleague wrote up a piece on how its phenomenal multiplayer makes it an early game of the year contender. Within an hour of starting it, I, too, was utterly hooked. In the time since, I have poured a whopping 40 hours into Slay the Spire 2, yet even after sinking that much time into the game, I can’t get over it. I adore how fast-paced, strategic, and immensely satisfying it is, regardless of whether you’re playing solo or with friends. And that first time I hit the Spire and became aware of the greater gimmick at play? That’s definitely one of my favorite gaming moments of 2026 thus far.

However, my positive experience with the game and my colleague’s glowing words fail to reflect how Slay the Spire 2 is being received on Steam right now. Though we (and thousands upon thousands of others) are thrilled with Mega Crit’s latest deck-builder, the game’s Steam listing states its reviews are “Mostly Negative.” So, what’s going on here?

For the past month, Slay the Spire 2 has been subject to relentless review-bombing. Whereas the game sat at an “Overwhelmingly Positive” rating back in mid-March, with 97% of players recommending it to others, it is now listed as “Mostly Negative,” with only 39% of its roughly 55,000 reviews in the last 30 days being favorable. Yet things get more intriguing when you look at where these negative reviews come from.

Continue Reading at GameSpot

YoloLiv 18mm F1.4 MFT Lens for YoloCam S7


YoloLiv Launches 18mm F1.4 Lens for YoloCam S7: A Complete Streaming Solution

In this article by Jose Antunes for ProVideo Coalition, YoloLiv announces its new 18mm F1.4 Micro Four Thirds (MFT) lens, purpose-built for the YoloCam S7 streaming camera. While the S7 impressed users with its 4K60 video, strong image quality, and interchangeable lens design, many creators struggled to find the right lens. This new release solves that issue by offering a dedicated, ready-to-use option that simplifies setup and enhances performance.

Developed in response to user feedback, the lens is part of YoloLiv’s broader effort to create a complete streaming ecosystem. Following the success of the YoloCam S3 webcam, the company designed this lens to fully optimize the S7’s capabilities. With a fast F1.4 aperture, users can achieve professional-looking visuals, including smooth background blur and improved low-light performance.

The lens features a compact, lightweight design with a 7-element optical structure, including aspherical and low-dispersion elements, plus nano coatings to reduce flare and ghosting. It supports both autofocus and manual focus, offers an aperture range from F1.4 to F22, and includes a stepper motor for smooth, precise focusing.

Read the full article by Jose Antunes for ProVideo Coalition HERE

Learn more about YoloLiv YoloCam S7 and MFT Lens HERE

Learn more about YoloLiv HERE

TypeScript 7 Beta Now Enabled by Default in Visual Studio 2026 18.6 Insiders 3



In Visual Studio 2026 18.6 Insiders 3 we have updated the built-in TypeScript SDK to TypeScript 7 Beta (native preview). The TypeScript SDK provides the compiler and language service used for TypeScript and JavaScript support in Visual Studio. This update impacts any project that uses the built-in SDK, including TypeScript projects, ASP.NET Core projects with npm packages, and any TypeScript or JavaScript files you are editing. If your project doesn’t have a specific TypeScript version installed, Visual Studio will use the new native compiler by default. In this post we will go over what this change means for you, how to use a different version of TypeScript if needed, and the known issues we are currently working on. You can download the latest Insiders release with the link below.

What is the TypeScript 7 native preview?

TypeScript 7 is a native port of the TypeScript compiler and tools. This is a significant change that brings native execution speed and shared-memory parallelism to the TypeScript compiler and language service. We have seen compile time improvements of up to 10x for large code bases, along with substantially reduced memory usage. If you are working with large TypeScript or JavaScript projects, you should see a noticeable improvement across your entire development experience.

In addition to faster compile times, the TypeScript language service has significant performance improvements as well. We have seen that the time to load projects has decreased roughly 8x. The improvements are not limited to load times; you should see a general speed improvement across the board with any feature that interacts with the TypeScript language service. Some of the Visual Studio features that benefit from these improvements include:

  • IntelliSense and completions. Code completions and parameter info should appear faster, especially in large projects where you may have previously noticed a delay.
  • Find All References. Searching for references across your solution is significantly faster.
  • Go to Definition. Navigating to definitions is more responsive.
  • Error diagnostics. Squiggles and error lists update more quickly as you type.
  • Project load times. Opening TypeScript and JavaScript projects in Visual Studio should be noticeably faster, with load times decreasing by roughly 8x.

If you are working with large code bases, you should see a noticeable improvement to your entire development experience. You will spend less time waiting for the IDE to respond and more time being productive working on your applications.

For more details on TypeScript 7 and the performance improvements, see the Announcing TypeScript 7.0 Beta blog post.

Using a different TypeScript version

Visual Studio ships with a built-in version of the TypeScript compiler and language service for cases where the project doesn’t specify a specific version to be used. Starting with this release, that built-in version is TypeScript 7 Beta. If you prefer to use a different version, you can install it in your project and Visual Studio will always use the project-local version over the built-in one.

Disabling TypeScript 7 native preview

If you want to go back to using the previous TypeScript language service, you can disable the native preview in Visual Studio. Go to Tools > Options > Preview Features and search for “native preview”. Uncheck the Enable JavaScript/TypeScript Native Language Service Preview option and restart Visual Studio.

Using TypeScript 6.x (GA)

To use the current stable release, install the typescript package in your project.

npm install -D typescript@^6.0.0

Using a specific TypeScript 7 native preview version

If you want to pin to a specific version of the native preview, install the @typescript/native-preview package.

npm install -D @typescript/native-preview@beta

In both cases, Visual Studio will detect the version in your node_modules and use that instead of the built-in SDK.
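
For reference, either install simply records the dependency in your project’s package.json; a minimal sketch of the stable-release case (version numbers illustrative):

{
  "devDependencies": {
    "typescript": "^6.0.0"
  }
}

The native preview package lands under devDependencies the same way, and Visual Studio picks up whichever of the two it finds in node_modules.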

Known issues

TypeScript 7 brings significant performance improvements to Visual Studio, and we are continuing to refine the experience. Below are the known issues that we are actively working on. This is not an exhaustive list.

  • IntelliSense. You may notice completions not appearing in some cases. In .cshtml files, the TypeScript completion list may not appear inside a <script> tag. When accepting a completion for the last argument of a function, the closing parenthesis may be removed. Pressing Ctrl+Space can work around this.
  • Code Actions & Refactoring. Quick fixes (Ctrl+.) are not available yet. Only Copilot AI-based suggestions may appear. The Organize Imports command (Ctrl+R, Ctrl+G) is also not available.
  • Navigation & Search. The navigation bar dropdowns at the top of the editor do not show document symbols. Find All References (Shift+F12) shows a flat list without semantic grouping (read/write/declaration), and cross-file references may be incomplete. Code search results may show mismatched titles and descriptions.
  • CodeLens. Reference counts (e.g., “19 references”) do not appear above interface and class declarations.
  • Hover tooltips. Hover tooltips are missing the symbol icon and have different text coloring compared to the previous language service.
  • Snippets. Insert Snippet (Ctrl+K, Ctrl+X) does not work in JavaScript files.
  • JSDoc. Typing /** above a function with parameters does not auto-generate the JSDoc template with @param entries.
  • Formatting. Unchecking “Format on open block {” in Tools > Options > Text Editor > JavaScript/TypeScript > Formatting does not take effect.
  • Task List. If a TypeScript file contains both a TODO comment and a variable named “TODO”, the Task List may incorrectly show duplicate tasks.
  • File and folder rename. Renaming a file or folder in a TypeScript project does not consistently update import paths in other files.
  • File watching. When files are modified outside of Visual Studio, changes are not detected until the file is opened and modified inside the IDE. Errors from external edits will not appear in the Error List.

We appreciate your feedback as we work toward full parity.

Reporting feedback

If you have feedback on the TypeScript compiler or language service, the best place to file it is the typescript-go GitHub repo.

If you are running into an issue that is specific to Visual Studio, you can share feedback with us via Developer Community: report any bugs or issues via Report a Problem and share your suggestions for new features or improvements to existing ones.

We would love it if you could try out the new experience and let us know how it’s working for you.

 

Rivian downsizes DOE loan to $4.5B, while boosting capacity of Georgia factory


Rivian has reworked its loan deal with the Department of Energy and now expects to borrow $4.5 billion to build its new factory in Georgia, down from the original amount of $6.6 billion allocated under the Biden administration.

The company also announced Thursday that it will draw on the loan sooner than planned, in early 2027, and expects to increase the total capacity of the Georgia plant from 200,000 to 300,000 vehicles in its initial phase of operation — another sign that the company has high hopes for its upcoming R2 SUV.

The larger capacity — a 50% increase over its initial plans — will help lower its per unit costs, while also providing significant room for future expansion of capacity in later phases, the company said Thursday. 

Some of the factory’s capacity will be used to produce R2 robotaxis for Uber. Under a deal struck earlier this year, Uber is making an initial $300 million investment in Rivian and is expected to purchase 10,000 fully autonomous R2 robotaxis ahead of a planned rollout in San Francisco and Miami in 2028. That initial $300 million payment is expected to close in the second quarter, and another $250 million investment is planned for later this year, according to Rivian.

The ride-hailing company has the option to buy up to 40,000 more autonomous R2 SUVs from Rivian starting in 2030. Uber has said it will invest up to $1.25 billion in Rivian through 2031 if the automaker meets a series of milestones.

Rivian broke ground on the Georgia factory late last year and is in the beginning stages of doing so-called vertical construction at the site located outside Atlanta. The company expects to start making vehicles by the end of 2028. Until then, Rivian will build R2 SUVs at its current factory in Normal, Illinois.

The company recently started production of the R2 despite the plant suffering damage from a tornado, and Rivian said Thursday it has made initial deliveries to employees. Deliveries to customers are expected to start “in the coming weeks,” according to Rivian.


The modifications to the DOE loan come as Rivian revealed financial results for the first quarter of 2026 on Thursday. The company generated $1.38 billion in revenue, with $908 million coming from vehicle sales and $473 million from software and services. The company lost $416 million in the quarter, down from a $541 million loss in the same period last year.
