Techfullpost

Mind-Blowing Satellite Secrets Revealed: XRISM and Moon Sniper’s Epic Mission

Satellite Secrets Revealed

A groundbreaking satellite designed to revolutionize our understanding of celestial objects and a lunar lander known as the “Moon Sniper” embarked on a momentous journey on Wednesday night.

After several weather-related delays, the Japan Aerospace Exploration Agency (JAXA) launched the mission aboard an H-IIA rocket from the Tanegashima Space Center at 7:42 p.m. ET on Wednesday, or 8:42 a.m. Japan Standard Time on Thursday.

The event was streamed live on JAXA’s YouTube channel, providing viewers with broadcasts in both English and Japanese.

The satellite, officially named XRISM (pronounced “crism”), short for X-Ray Imaging and Spectroscopy Mission, is a collaborative endeavor between JAXA and NASA, with contributions from the European Space Agency and Canadian Space Agency.

Accompanying XRISM is JAXA’s SLIM, or Smart Lander for Investigating Moon. This compact lunar lander is engineered to demonstrate an extremely precise landing within 100 meters (328 feet), as opposed to the typical kilometer range, utilizing high-precision landing technology. This remarkable precision earned the mission its moniker, “Moon Sniper.”

According to NASA, the satellite and its two instruments will focus on observing the universe’s hottest regions, largest structures, and objects with the most substantial gravitational forces. XRISM’s primary function is to detect X-ray light, a wavelength that is invisible to the human eye.

X-rays are emitted by some of the most energetic objects and events in the cosmos, which is why astronomers are eager to study them.

The satellite is equipped with thousands of curved, nested mirrors specially designed to detect X-rays, which require different optics than other wavelengths of light. Upon reaching orbit, XRISM will require several months of calibration. The mission is designed to operate for three years.

XRISM is capable of detecting X-rays with energies ranging from 400 to 12,000 electron volts, significantly beyond the energy levels of visible light, which typically ranges from 2 to 3 electron volts, according to NASA. This broad range of detection will enable the study of cosmic phenomena across the universe.
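To put those numbers in perspective, here is a quick back-of-the-envelope comparison (illustrative arithmetic using only the energies quoted above):

\[
\frac{400\ \text{eV}}{2\text{--}3\ \text{eV}} \approx 130\text{--}200,
\qquad
\frac{12{,}000\ \text{eV}}{2\text{--}3\ \text{eV}} \approx 4{,}000\text{--}6{,}000
\]

In other words, even the lowest-energy X-rays XRISM records carry more than a hundred times the energy of a visible-light photon.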

The satellite houses two instruments: Resolve and Xtend. Resolve tracks minute temperature fluctuations to determine the source, composition, motion, and physical state of X-rays. Resolve operates at a chilling temperature of minus 459.58 degrees Fahrenheit (minus 273.10 degrees Celsius), approximately 50 times colder than deep space, thanks to a container of liquid helium.
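As a rough check on that comparison (using the standard value of absolute zero, minus 273.15 degrees Celsius, and the roughly 2.7-kelvin temperature of deep space, figures not quoted in this article):

\[
T_{\text{Resolve}} \approx 273.15 - 273.10 = 0.05\ \text{K},
\qquad
\frac{T_{\text{deep space}}}{T_{\text{Resolve}}} \approx \frac{2.7\ \text{K}}{0.05\ \text{K}} \approx 54
\]

which is consistent with the “approximately 50 times colder” figure.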

This instrument will enable astronomers to unlock the mysteries of cosmic phenomena, including the chemical details of hot gas within galactic clusters.

Meanwhile, Xtend gives XRISM one of the largest fields of view of any X-ray imaging satellite.

“The spectra XRISM collects will be the most detailed we’ve ever seen for some of the phenomena we’ll observe,” said Brian Williams, NASA’s XRISM project scientist at Goddard. “The mission will provide us with insights into some of the most difficult places to study, like the internal structures of neutron stars and near-light-speed particle jets powered by black holes in active galaxies.”

In parallel, SLIM will employ its own propulsion system to embark on a journey toward the moon. The spacecraft will reach lunar orbit approximately three to four months after launch, orbit the moon for one month, and then begin its descent, aiming for a soft landing within four to six months after launch. If the lander achieves its goal, it will also conduct a brief study of the lunar surface.

Unlike recent lunar lander missions that have targeted the moon’s south pole, SLIM has set its sights on a location near a small lunar impact crater called Shioli, near the Sea of Nectar. Here, it will investigate rock compositions that could help scientists uncover the moon’s origins. The chosen landing site is just south of the Sea of Tranquility, where Apollo 11 made its historic landing near the moon’s equator in 1969.

Achieving precise landings on the moon is a significant goal for JAXA and other space agencies, particularly in resource-rich areas such as the lunar south pole, where permanently shadowed regions contain water ice and where hazards like craters and rocks must be navigated. Future missions will require pinpoint landings to avoid these obstacles.

Moreover, SLIM’s lightweight design holds promise for agencies planning more frequent missions and exploring the moons of other planets, such as Mars. If SLIM achieves its objectives, JAXA believes it will revolutionize missions from “landing where we can” to “landing where we want.”

Meta is betting big, perhaps too big, on artificial intelligence. As the global race to build AI infrastructure heats up, the social media giant is investing billions into what it believes will define the next era of computing. But as Wall Street’s latest reaction shows, not everyone is buying it.

The company, led by chief executive Mark Zuckerberg, is constructing two giant data centers in the U.S. as part of a wider AI expansion. U.S. tech companies will collectively invest as much as $600 billion in infrastructure over the next three years, according to estimates from industry insiders, with Meta among the biggest spenders.

But as Silicon Valley celebrates the AI boom, investors are asking whether Meta’s spending spree is sustainable, let alone strategic.

Earnings Reveal Soaring Costs — and Investor Doubts

Meta’s latest quarterly report showed a sharp rise in costs: operating expenses were up $7 billion year over year and capital expenditures rose nearly $20 billion, largely driven by the acquisition of AI infrastructure and talent. The company generated $20 billion in profit for the quarter, but investors focused on the ballooning expenses — and the lack of clear AI monetization.

During the earnings call, Zuckerberg defended the aggressive spending.

“The right thing is to accelerate this — to make sure we have the compute we need for AI research and our core business,” he said. “Once we get the new frontier models from our Superintelligence Lab (MSL) online, we’ll unlock massive new opportunities.”

But the reassurance didn’t land. Meta’s stock sank 12% by Friday’s close, wiping out more than $200 billion in market value within days.

Big Spending, Small Returns (For Now)

While Meta isn’t alone in its AI splurge – Google, Microsoft, Nvidia, and OpenAI are also spending billions on computing – the key difference is in the results. Google and Nvidia are already seeing strong AI-driven revenue growth, while OpenAI, though a far riskier bet, has one of the fastest-growing consumer products in history, generating around $20 billion a year.

But Meta has yet to introduce the blockbuster AI product that would seem to justify the astronomical spending.

Its flagship Meta AI assistant reportedly serves over a billion users, but that reach is largely a function of its integration across Facebook, Instagram, and WhatsApp rather than organic adoption. Analysts say it still lags far behind competitors such as ChatGPT and Claude in functionality and brand strength.

Meanwhile, Meta’s Vibes video generator, which gave the company a fleeting bump in engagement, has yet to prove its commercial viability. And while the Vanguard smart glasses it introduced with Ray-Ban do hold some promise for combining AI and augmented reality, they’re still more prototype than core business driver.

Zuckerberg’s Vision: Superintelligence and the Future

Undeterred by the skepticism, Zuckerberg insists Meta’s AI ambitions are only just getting started. He said the company’s Superintelligence Lab, or MSL, is working on next-generation “frontier models” that will power entirely new classes of products.

“It’s not just Meta AI as an assistant,” Zuckerberg said. “We expect to build new models and products — things that redefine how people and businesses interact with technology.”

Yet he didn’t provide any details or timelines, which frustrated analysts hoping for concrete projections. The promise of “more details in the coming months” wasn’t enough to calm investor nerves.

The AI Bubble Question

A massive infrastructure build-out at Meta has revived fears that the technology industry might be inflating yet another bubble. With tens of billions of dollars pouring into GPUs, data centers, and AI labs, some analysts warn that valuations in the sector are running ahead of tangible outcomes.

Yet, others argue that Meta’s financial position gives it more room to experiment. Unlike many AI startups, Meta still has a profitable advertising empire to fall back on. Its 3 billion monthly active users across its apps provide an unmatched data advantage — if it can find a compelling AI use case.

Where Does Meta Go From Here?

The company’s direction is far from settled, and fundamental strategic questions remain open:

Will Meta use its vast personal data ecosystem to challenge OpenAI and Anthropic directly?

Does it want to integrate AI-powered advertising and business tools for enterprises?

Or will it shift to immersive consumer products, merging AI with AR/VR in the metaverse?

For now, those answers remain elusive. One thing is for sure: Zuckerberg is playing the long game, one that could either solidify Meta’s role in the next era of computing or turn into one of Silicon Valley’s most expensive miscalculations. As the AI arms race accelerates, Meta’s challenge isn’t just to build smarter machines — it’s to convince investors, and the world, that the company still knows where it’s going.

Redmond, Washington — In a bold move to expand its artificial intelligence infrastructure, Microsoft announced a $9.7 billion deal with data-center operator IREN that would give the tech giant long-term access to Nvidia’s next-generation AI chips. The agreement underscores how deeply the AI race has become defined by access to high-performance computing power.

That investment will also translate into a five-year partnership that lets Microsoft significantly ramp up its cloud computing and AI capacity without having to immediately build new data centers or secure additional power—two of the biggest bottlenecks constraining Microsoft’s AI expansion today.

IREN Shares Spike Following Microsoft Partnership

Following that announcement, IREN’s stock soared as much as 24.7% to a record high before finishing nearly 10% higher by Monday’s close. The news also gave a modest lift to Dell Technologies, which will be supplying AI servers and Nvidia-powered equipment to IREN as part of the collaboration.

The deal includes a $5.8 billion equipment agreement with Dell, part of which involves IREN providing Microsoft with access to systems equipped with the advanced Nvidia chips known as the GB300.

Strengthening Microsoft’s AI Muscle

The move highlights the intensifying competition among tech giants such as Microsoft, Amazon, Google, and Meta to secure the computing capacity that powers generative AI tools such as ChatGPT and Copilot, among other machine-learning models.

Microsoft has invested heavily in OpenAI amid mounting infrastructure constraints, as demand for AI-powered services explodes across its cloud ecosystem. Earnings reports from major tech firms last week showed that a limited supply of chips and data-center capacity remains the main constraint on how fully the industry can capitalize on the AI boom.

For Microsoft, the partnership delivers an immediate infrastructure boost without the high upfront costs of building new hyperscale data centers, and a way to stay agile as new chip generations arrive rapidly from Nvidia.

“This deal is a strategic move by Microsoft to expand capacity while maintaining its AI leadership without taking on the depreciation risks tied to fast-evolving chip hardware,” said Daniel Ives, managing director at Wedbush Securities.

IREN’s Huge Expansion Plans

IREN, whose market value has risen more than sixfold in 2025 to $16.5 billion, operates several large-scale data centers across North America, with a combined capacity of 2,910 megawatts.

Under the new deal, the company will deploy Nvidia’s processors in phases through 2026 at its 750-megawatt Childress, Texas campus, where it is building liquid-cooled data centers designed to deliver approximately 200 megawatts of critical IT capacity.

Microsoft’s prepayment would finance IREN’s purchase of the Dell equipment valued at $5.8 billion. However, the deal comes with strict performance clauses that allow Microsoft to revoke the contract if IREN misses delivery timelines.

Rising “Neocloud” Powerhouses

The deal also speaks to the emergence of “neocloud” providers like CoreWeave, Nebius Group, and IREN — companies that specialize in selling Nvidia GPU-powered cloud computing infrastructure. These firms have become key partners for Big Tech companies trying to scale AI operations faster than traditional data-center timelines allow.

Earlier this year, Microsoft inked a $17.4 billion deal with Nebius Group, a similar provider, for cloud infrastructure capacity. Taken together, the moves mark Microsoft’s multi-pronged strategy to secure AI infrastructure from multiple partners amid global shortages of Nvidia hardware.

A Broader AI Infrastructure Push

On the same day, AI infrastructure startup Lambda revealed a multi-billion-dollar deal with Microsoft to deploy more GPU-powered cloud infrastructure using Nvidia’s latest hardware.

To industry analysts, these rapid investments are part of a larger race to lock in supply chains for a resource now viewed as being as critical as oil in the digital economy: AI computing.

“We’re seeing the dawn of a whole new AI infrastructure ecosystem,” said Sarah McKinney, an AI market strategist. “Microsoft’s deals with IREN and Nebius show that the company is securing every possible avenue to power the next wave of AI applications.”

The Growing Infrastructure Challenge of AI

High demand for AI, meanwhile, has put incredible pressure on computing resources globally. As companies scramble to find GPUs and data-center capacity, the cost of AI infrastructure has soared.

Partnering with existing operators like IREN ultimately gives Microsoft the flexibility to meet surging workloads with minimal capital expenditure and supply-chain delays. The approach also allows it to diversify its geographic footprint, reducing risks associated with power constraints or regulatory hurdles in any single region.

With this agreement, Microsoft cements its status as one of the leaders in the world’s artificial intelligence ecosystem and positions its Azure cloud as a backbone for next-generation AI applications. For IREN, the partnership represents a turning point in its transformation from a low-profile data center provider to an important player in the infrastructure powering the AI revolution. As the world’s demand for AI accelerates, one thing is clear: the race for computing power is just getting underway, and partnerships like Microsoft’s $9.7 billion IREN deal will likely define who leads in the next decade of artificial intelligence.
