Here is the daily technology #threadcast for 11/16/24. The goal is to make this a technology "reddit".
Drop all questions, comments, and articles relating to #technology and the future. The goal is to make it a technology center.
I saw this yesterday and I'm happy about it. I also want to be present in these incredible moments.
Good job, Task!
Yeah I am not sure if my goal of 2,000 by year end is possible but activity is kicking up a bit.
It's a bold goal, but the good thing is that more people are helping too. So it is possible to achieve this.
Yeah. I am not sure about the timeline, but we are adding some good content to the database if nothing else. A lot of connections are arising from the data being provided.
The Coordinated Spatial Reuse (Co-SR) feature addresses varying signal strength between nearby devices and distant access points in high-density office scenarios. The capability enables APs to adjust and coordinate their power levels dynamically, depending on the distance between devices and other APs, to maintain the right signal strength. According to MediaTek's preliminary testing, this can improve overall system efficiency by 15% to 25%.
Similarly, Wi-Fi 8's Coordinated Beamforming (Co-BF) advances previous beamforming technology by coordinating signal direction among multiple access points. This allows the system to avoid sending signals to areas and devices where they are not needed, reducing interference and concentrating the signal toward active devices. In MediaTek's tests, Co-BF has improved throughput by 20% to 50% in setups involving mesh networks shared in public spaces and some homes.
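To make the Co-SR idea concrete, here is a toy Python sketch of coordinated power control. It is not MediaTek's algorithm: the free-space path-loss model, the 6 GHz channel frequency, the -65 dBm target signal level, and the 20 dBm power cap are all illustrative assumptions. The point is simply that an AP serving a nearby device can back its transmit power off sharply, which is what leaves less interference for neighboring APs and their clients.

```python
# Toy illustration (not MediaTek's implementation) of the idea behind
# Coordinated Spatial Reuse: each AP trims its transmit power so that its own
# client still hears a target signal level, leaving less interference for
# clients served by neighboring APs. All numbers are illustrative assumptions.
import math

def path_loss_db(distance_m: float, freq_mhz: float = 5955.0) -> float:
    """Free-space path loss in dB at the given distance and frequency."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55

def coordinated_tx_power(client_distance_m: float,
                         target_rssi_dbm: float = -65.0,
                         max_tx_dbm: float = 20.0) -> float:
    """Lowest transmit power (dBm) that still reaches the target signal level."""
    needed = target_rssi_dbm + path_loss_db(client_distance_m)
    return min(max_tx_dbm, needed)

# Two APs: one serving a nearby client, one serving a distant client.
for ap, dist in [("AP-1 (client 3 m away)", 3), ("AP-2 (client 25 m away)", 25)]:
    print(f"{ap}: transmit at {coordinated_tx_power(dist):.1f} dBm instead of 20.0 dBm")
```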
This sometimes happens to me. An oversight, we end up typing elsewhere without meaning to. LOL
Remember, this is the worst that AI will ever be. It will only get better from here.
True words, my friend, and so I'm anticipating a wonderful future where we get AI power and become a more productive civilization.
Yes indeed, and I think there's going to be a time when communication with AI might surpass communication with humans. I need to talk to ChatGPT for it to do something, and I'm starting to do that more than talking to friends.
More than 50% of the traffic on the internet is already generated by computers. The overwhelming majority of email is computer generated. With that, we are looking at the same path for AI.
Computer to computer interaction will dominate. That is what AI agents are all about.
Oh, so I'm even behind on the whole idea. I thought it would be human-to-AI, but the next level is AI-to-AI. That would multiply data production, because AI will produce more automatically than humans will.
AI agents are already being built. That is the future. Each of us is going to have them, along with each business.
Webpages will be dying in the future.
With the path OpenAI has taken with SearchGPT, I totally believe it when you say webpages will be dying in the future.
It's just easier to get your answer directly than to scroll through webpages.
Celebrities are leaving X for Bluesky, a social media platform offering more control, fewer bots and better privacy features. The platform now has 15 million users.
But I doubt it will reach X's level.
Over the last 18 months, X went from 450 million to over 600 million users.
So yeah, there is little chance. And notice how the advertisers are returning. It will be similar with users.
I couldn't agree more man, they're just leaving to come back. Elon barely cares though, he knows these guys are just going to have to return
If the sale still ends up in its hands, The Onion plans to rebrand Infowars as a parody of itself (more than it already was), poking fun at “weird internet personalities” like Jones, according to The New York Times. Ben Collins, the CEO of Global Tetrahedron, hasn’t said how much it paid to transform Infowars’ destructive self-parody into constructive satire. (Collins reported extensively on Infowars when covering misinformation at NBC News.) He plans to launch the rebooted site in January.
Several communist municipalities have even made him an honorary citizen, and protests have frequently been held outside his prison in Lannemezan, in southwestern France. "Georges Ibrahim Abdallah is the victim of a state justice that shames France," Nobel Prize-winning author Annie Ernaux said in a piece in the communist daily L'Humanite last month.
The Human Rights League, a leading French human rights NGO, has long maintained that Abdallah’s continued imprisonment violates human rights.
As far as making money, Microsoft was in the mix from as early as 2016, offering OpenAI $60 million worth of compute on Azure in exchange for, among other things, the companies “evangelizing” one another. No one seemed into this kind of corporate back-scratching, and Musk wrote that it made him “nauseous.”
They ultimately ended up paying far more but with no obligation on either side. “Would be worth way more than $50M not to seem like Microsoft’s marketing bitch,” wrote Musk.
Lastly, a minor nugget mentioned by board member Shivon Zilis (who would later become mother to three of Musk’s children): Valve founder Gabe Newell was, in addition to being a donor to the project in the early days, on Altman and Greg Brockman’s “informal advisory board.” It’s unclear what role he had or has in the day-to-day there. I’ve asked Newell for comment.
If the acquisition had happened, it could have benefited both companies. Cerebras would’ve avoided the path to a tricky IPO, while OpenAI might’ve had a vital resource in its race to build in-house chips.
OpenAI has long sought to reduce its reliance on Nvidia, which commands a massive share of the market for AI-optimized chips. While OpenAI is late to the in-house chip game — Google and Amazon Web Services, among others, have long offered chips designed for AI workloads — it’s under pressure to reduce the cost of model training, fine-tuning, and running. Having its own chips could be one way to attain the reductions it needs.
"Everything we do is to reward and support our retail diamondhands following," Moore wrote, referring to a term popularized in the crypto community for long-term believers.
Moore appears to have subsequently deleted his X account. His firm, 8VC, did not immediately respond to CNBC's request for comment.
OpenAI at one point hoped to establish a network of factories for chip manufacturing, and was considering an acquisition target. But it’s reportedly abandoned those plans in favor of aggressively building out a team of chip designers and engineers, and working with semiconductor firms Broadcom and TSMC to create an AI chip for running models. It could arrive as soon as 2026.
Last Monday after market close, Palantir reported third-quarter earnings and revenue that topped estimates and issued a fourth-quarter forecast that was also ahead of Wall Street's expectations. CEO Alex Karp wrote in the earnings release that the company "absolutely eviscerated this quarter," driven by demand for artificial intelligence technologies.
U.S. government revenue increased 40% from a year earlier to $320 million, while U.S. commercial revenue rose 54% to $179 million. On the earnings call, the company highlighted a five-year contract to expand its Maven technology across the U.S. military. Palantir established Maven in 2017 to provide AI tools to the Department of Defense.
According to the court documents, seen by TechCrunch, NSO had developed a suite of hacking tools to be used against targets using WhatsApp, capable of accessing private data on the target’s phone. The hacking suite was called “Hummingbird,” and two of the suite’s exploits were dubbed “Eden” and “Heaven.”
This suite cost NSO’s government customers — namely police departments and intelligence agencies — up to $6.8 million for a one-year license, and netted NSO “at least $31 million in revenue in 2019,” according to one of the court documents.
Thanks to these hacking tools, NSO installed Pegasus on “between hundreds and tens of thousands” of target devices, according to a deposition by NSO’s head of research and development Tamir Gazneli.
Thiel's Palantir holdings have increased in value by about $3 billion since the earnings report and $2 billion since the election.
In September, S&P Global announced Palantir would join the S&P 500 stock index.
Analysts at Argus Research say the rally has pushed the stock too high given the current financials and growth projections. The analysts still have a long-term buy rating on the stock and said in a report last week that the company had a "stellar" quarter, but they downgraded their 12-month recommendation to a hold.
Until now, it wasn’t clear who was actually sending the malicious WhatsApp messages to target individuals with spyware. For years, NSO has claimed to have no knowledge of customers’ operations, and not be involved in carrying out the targeted cyberattacks. The newly released court documents cast doubt on some of NSO’s claims.
WhatsApp argued in one of the court documents that, “NSO’s customers’ role is minimal,” given that the government customers only needed to input the phone number of the target’s device and, citing an NSO employee, “press Install, and Pegasus will install the agent on the device remotely without any engagement.”
The post-earnings rally coincides with the period following last week's presidential election. Palantir is seen as a potential beneficiary given the company's ties to the Trump camp. Co-founder and Chair Peter Thiel was a major booster of Donald Trump's first victorious campaign, though he had a public falling out with Trump in the ensuing years.
When asked in June about his position on the 2024 election, Thiel said, "If you hold a gun to my head I’ll vote for Trump."
“In other words, the customer simply places an order for a target device’s data, and NSO controls every aspect of the data retrieval and delivery process through its design of Pegasus,” WhatsApp argued.
The court filings cited an NSO employee as saying it “was our decision whether to trigger [the exploit] using WhatsApp messages or not,” referring to one of the exploits the company offered its customers.
When reached for comment, NSO spokesperson Gil Lainer said in a statement to TechCrunch: “NSO stands behind its previous statements in which we repeatedly detailed that the system is operated solely by our clients and that neither NSO nor its employees have access to the intelligence gathered by the system.”
On Wednesday, the Federal Bureau of Investigation (FBI) and the U.S. cyber watchdog agency CISA said China-linked hackers have intercepted surveillance data intended for American law enforcement agencies after breaking into an unspecified number of telecom companies.
Earlier in October, the Journal reported that Chinese hackers accessed the networks of U.S. broadband providers, including Verizon Communications, AT&T and Lumen Technologies, and obtained information from systems the federal government uses for court-authorized wiretapping.
NSO’s three exploits targeted WhatsApp users
One technique that NSO used to allow its customers to target WhatsApp users, described in one document, was to set up something the company called a “WhatsApp Installation Server,” or WIS, which WhatsApp calls a “fake client.” This was essentially a modified version of the WhatsApp app that NSO developed and used to send messages — including their malicious exploits — to regular WhatsApp users. NSO admitted setting up real WhatsApp accounts for its customers, per one of the court documents.
WhatsApp was able to defeat both NSO’s “Eden” and “Heaven” exploits with patches and security updates, according to an internal NSO communication.
Beijing has previously denied claims by the U.S. government and others that it has used hackers to break into foreign computer systems.
Another interesting detail that surfaced this week is the admission by one of the NSO employees deposed in the course of the lawsuit that Pegasus was used against Dubai’s Princess Haya, a case that was reported by The Guardian and The Washington Post in 2021, and later by The New Yorker in 2023.
The same NSO employee said the spyware maker “disconnected” access to Pegasus for 10 customers, citing abuse of the spyware.
At this point in the legal case, WhatsApp is asking the judge to issue a summary judgment in the case, and is awaiting a decision.
The stock is getting hammered. After the shares soared more than 14-fold from the end of 2022 to their peak in March of this year, they've since plummeted by 85%. Super Micro's stock is now equal to where it was trading in May 2022, after falling another 11% on Thursday.
Getting delisted from the Nasdaq could be next if Super Micro doesn't file a compliance plan by the Monday deadline or if the exchange rejects the company's submission. Super Micro could also get an extension from the Nasdaq, giving it months to come into compliance. The company said Thursday that it would provide a plan to the Nasdaq in time.
“Eden/Heaven/Hummingbird R.I.P. announcement,” read a message sent to NSO employees.
The court documents show that NSO’s Heaven exploit was active before 2018, and was designed to direct target WhatsApp devices into communicating with a malicious WhatsApp relay server controlled by NSO.
After WhatsApp patched its systems against NSO’s Heaven exploit, NSO developed a new exploit called “Eden,” which an NSO employee quoted by the court documents said, “need[ed] to go through WhatsApp relay servers,” which the Heaven exploit had sought to avoid. It was the use of the Eden exploit that led to WhatsApp filing its lawsuit against NSO, according to a deposition by another NSO employee.
A third exploit developed by NSO, revealed in the documents, was called “Erised,” a so-called “zero-click” exploit that could compromise a victim’s phone without any interaction from the victim. WhatsApp blocked the use of NSO’s Erised exploit in May 2020, several months after WhatsApp had filed its lawsuit.
A representative for the Nasdaq said the exchange doesn't comment on the delisting process for individual companies, but the rules suggest the process could take about a year before a final decision.
The Nasdaq warned Super Micro on Sept. 17 that it was at risk of being delisted. That gave the company 60 days to submit a plan of compliance to the exchange, and because the deadline falls on a Sunday, the effective date for the submission is Monday.
If Super Micro's plan is acceptable to Nasdaq staff, the company is eligible for an extension of up to 180 days to file its year-end report. The Nasdaq wants to see if Super Micro's board of directors has investigated the company's accounting problem, what the exact reason for the late filing was and a timeline of actions taken by the board.
Meanwhile, the details that have come out from the lawsuit this week could help other people who have sued NSO in other countries, according to Natalia Krapiva, the tech legal counsel at Access Now, a nonprofit that has investigated some cases of abuse carried out with NSO’s spyware.
“WhatsApp’s sticking with their legal action finally reaps some benefits,” Krapiva told TechCrunch. “While it is true that NSO has not been sharing much information (especially things like Pegasus codes, list of customers, etc.), the information that they did share is already quite useful for this case but also for legal cases against NSO around the world.”
The Nasdaq says it looks at several factors when evaluating a plan of compliance, including the reasons for the late filing, upcoming corporate events, the overall financial status of the company and the likelihood of a company filing an audited report within 180 days. The review can also look at information provided by outside auditors, the SEC or other regulators.
With Grok, xAI aims to directly compete with companies including ChatGPT creator OpenAI, which Musk helped start before a conflict with co-founder Sam Altman led him to depart the project in 2018. It will also be vying with Google's Bard technology and Anthropic's Claude chatbot.
Now that Donald Trump is president-elect, Musk is beginning to actively work with the new administration on its approach to AI and tech more broadly, as part of Trump's inner circle in recent weeks.
Last week, Super Micro said it was doing everything it could to remain listed on the Nasdaq, and said a special committee of its board had investigated and found no wrongdoing. Super Micro CEO Charles Liang said the company would receive the board committee's report as soon as last week. A company spokesperson didn't respond when asked by CNBC if that report had been received.
If the Nasdaq rejects Super Micro's compliance plan, the company can request a hearing from the exchange's Hearings Panel to review the decision. Super Micro won't be immediately kicked off the exchange – the hearing panel request starts a 15-day stay for delisting, and the panel can decide to extend the deadline for up to 180 days.
Trump plans to repeal President Joe Biden's executive order on AI, according to his campaign platform, stating that it "hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology" and that "in its place, Republicans support AI Development rooted in Free Speech and Human Flourishing."
If the panel rejects that request or if Super Micro gets an extension and fails to file the updated financials, the company can still appeal the decision to another Nasdaq body called the Listing Council, which can grant an exception.
Ultimately, the Nasdaq says the extensions have a limit: 360 days from when the company's first late filing was due.
Hou this week urged the court to issue a temporary restraining order after noticing a filing by TuSimple China that signaled the company was about to transfer money (or already had) out of the United States. Two TuSimple China subsidiaries last week registered an increase in assets collectively worth $150 million, according to Hou’s declaration and information from public filings.
“These filings show a suspicious increase in registered assets between these two subsidiaries in one day as a precursor to large amount of cash transfer from U.S. to China,” reads the declaration. “The most likely scenario is that these filings in China were the preparatory steps before TuSimple U.S. transfers money to those subsidiaries in China.”
History also reveals just how long the delisting process can take.
[Photo caption: Charles Liang, chief executive officer of Super Micro Computer Inc., right, and Jensen Huang, co-founder and chief executive officer of Nvidia Corp., during the Computex conference in Taipei, Taiwan, on Wednesday, June 5, 2024.]
There's one factor at play that could hurt Super Micro's chances of an extension. The exchange considers whether the company has any history of being out of compliance with SEC regulations.
Between 2015 and 2017, Super Micro misstated financials and published key filings late, according to the SEC. It was delisted from the Nasdaq in 2017 and was relisted two years later.
Super Micro "might have a more difficult time obtaining extensions as the Nasdaq's literature indicates it will in part 'consider the company's specific circumstances, including the company's past compliance history' when determining whether an extension is warranted," Wedbush analyst Matt Bryson wrote in a note earlier this month. He has a neutral rating on the stock.
Hou added that such large cash transfers are “beyond normal course of business” and comparable to “TuSimple China’s heyday of operation when it was operating a large autonomous truck fleet in Shanghai” and had around 700 employees on its payroll. As of September, TuSimple China had around 200 employees.
The window of opportunity for shareholders like Hou to get what they want — which is for TuSimple to liquidate so they can recuperate some of their losses — is narrowing.
TuSimple is in a gray area when it comes to enforcement from the Securities and Exchange Commission. While TuSimple delisted earlier this year, the company is still registered with the SEC and thus subject to U.S. scrutiny. Once the money goes to China, shareholders in the U.S. will have no recourse to claw back funds from their original investment.
TechCrunch has reached out to the SEC to learn if the agency is investigating TuSimple in relation to shareholder complaints.
TuSimple did not immediately respond to TechCrunch’s request for comment.
Super Micro missed an annual report filing deadline in June 2017, got an extension to December and finally got a hearing in May 2018, which gave it another extension to August of that year. It was only when it missed that deadline that the stock was delisted.
In the short term, the bigger worry for Super Micro is whether customers and suppliers start to bail.
Will McDonald’s and Doodles actually be able to give NFTs a serious revival? Nobody can say for sure. But with incoming president Donald Trump’s embrace of cryptocurrencies, it seems likely that many people will try to ride this wave into something lucrative. When major brands like McDonald’s are dipping their toes back into the water, it really feels like something’s afoot.
President Joe Biden issued a new executive order on artificial intelligence — the U.S. government's first action of its kind — requiring new safety assessments, equity and civil rights guidance and research on AI's impact on the labor market.
While law enforcement agencies have warned that they're ready to apply existing law to abuses of AI and Congress has endeavored to learn more about the technology to craft new laws, the executive order could have a more immediate impact. Like all executive orders, it "has the force of law," according to a senior administration official who spoke with reporters on a call Sunday.
Aside from the compliance problems, Super Micro is a fast-growing company making one of the most in-demand products in the technology industry. Sales more than doubled last year to nearly $15 billion, according to unaudited financial reports, and the company has ample cash on its balance sheet, analysts say. Wall Street is expecting even more growth to about $25 billion in sales in its fiscal 2025, according to FactSet.
Super Micro said last week that the filing delay has "had a bit of an impact to orders." In its unaudited September quarter results reported last week, the company showed growth that was slower than Wall Street expected. It also provided light guidance.
Creating new safety and security standards for AI, including by requiring some AI companies to share safety test results with the federal government, directing the Commerce Department to create guidance for AI watermarking, and creating a cybersecurity program that can make AI tools that help identify flaws in critical software.
Protecting consumer privacy, including by creating guidelines that agencies can use to evaluate privacy techniques used in AI.
Advancing equity and civil rights by providing guidance to landlords and federal contractors to help avoid AI algorithms furthering discrimination, and creating best practices on the appropriate role of AI in the justice system, including when it's used in sentencing, risk assessments and crime forecasting.
Protecting consumers overall by directing the Department of Health and Human Services to create a program to evaluate potentially harmful AI-related health-care practices and creating resources on how educators can responsibly use AI tools.
Supporting workers by producing a report on the potential labor market implications of AI and studying the ways the federal government could support workers affected by a disruption to the labor market.
Promoting innovation and competition by expanding grants for AI research in areas such as climate change and modernizing the criteria for highly skilled immigrant workers with key expertise to stay in the U.S.
The company said one reason for its weak results was that it hadn't yet obtained enough supply of Nvidia's next-generation chip, called Blackwell, raising questions about Super Micro's relationship with its most important supplier.
"We don't believe that Super Micro's issues are a big deal for Nvidia, although it could move some sales around in the near term from one quarter to the next as customers direct orders toward Dell and others," wrote Melius Research analyst Ben Reitzes in a note this week.
Working with international partners to implement AI standards around the world.
Developing guidance for federal agencies' use and procurement of AI and speeding up the government's hiring of workers skilled in the field.
The order represents "the strongest set of actions any government in the world has ever taken on AI safety, security, and trust," White House Deputy Chief of Staff Bruce Reed said in a statement.
It builds on voluntary commitments the White House previously secured from leading AI companies and represents the first major binding government action on the technology. It also comes ahead of an AI safety summit hosted by the U.K.
The senior administration official referenced the fact that 15 major American technology companies have agreed to implement voluntary AI safety commitments but said that it "is not enough" and that Monday's executive order is a step toward concrete regulation for the technology's development.
The only other bidder was First United American Companies, which runs a website using Jones’ name to sell his supplements. The company reportedly placed a $3.5 million bid that, based on cash alone, would have won the secret auction. First United’s attorney reportedly told the judge on Thursday that the trustee changed the auction process days before, omitting a final round on Wednesday that would have given the parties a chance to outbid each other.
The trustee only chose from the sealed bids submitted last week. However, he said his decision followed Judge Lopez’s auction rules in September, describing the final round as optional.
Lopez struck a disapproving tone in court, throwing the sale into question. “We’re all going to an evidentiary hearing, and I’m going to figure out exactly what happened,” the judge reportedly said. “No one should feel comfortable with the results of this auction.”
"The President, several months ago, directed his team to pull every lever, and that's what this order does: bringing the power of the federal government to bear in a wide range of areas to manage AI's risk and harness its benefits," the official said.
In a speech Monday at the White House, Biden said he'll meet on Tuesday with Senate Majority Leader Chuck Schumer, D-N.Y., and a bipartisan group put together by Schumer. He said the meeting is to "underscore the need for congressional action."
"This executive order represents bold action, but we still need Congress to act," Biden said.
Elon Musk allowed Jones back on X (Twitter) last year after the platform “permanently” banned him in 2018 under its previous ownership.
As America’s chief satire publication (at least of those doing so consciously), The Onion’s (perhaps premature) announcement of the purchase stayed on brand. Its tone, hinting at what’s to come, resembled The Colbert Report on steroids — or maybe Jones’ “Survival Shield X-2” pills.
“Founded in 1999 on the heels of the Satanic ‘panic’ and growing steadily ever since, InfoWars has distinguished itself as an invaluable tool for brainwashing and controlling the masses,” The Onion wrote in a truth-meets-fiction announcement. “With a shrewd mix of delusional paranoia and dubious anti-aging nutrition hacks, they strive to make life both scarier and longer for everyone, a commendable goal. They are a true unicorn, capable of simultaneously inspiring public support for billionaires and stoking outrage at an inept federal state that can assassinate JFK but can’t even put a man on the Moon.”
Biden's executive order requires that large companies share safety test results with the U.S. government before the official release of AI systems. It also prioritizes the National Institute of Standards and Technology's development of standards for AI "red-teaming," or stress-testing the defenses and potential problems within systems. The Department of Commerce will develop standards for watermarking AI-generated content.
The order also addresses training data for large AI systems, and it lays out the need to evaluate how agencies collect and use commercially available data, including data purchased from data brokers, especially when that data involves personal identifiers.
The Biden administration is also taking steps to beef up the AI workforce. Beginning Monday, the senior administration official said, workers with AI expertise can find relevant openings in the federal government on AI.gov.
The administration official said Sunday that the "most aggressive" timing for some safety and security aspects of the order involves a 90-day turnaround, and for some other aspects, that time frame could be closer to a year.
The air leak was addressed in a recent report by NASA’s Office of Inspector General (OIG), which highlighted its true severity and the risk it poses to the crew. The OIG report stated that the two space agencies can’t seem to agree on the point at which the leak should be considered unsustainable. NASA and Roscosmos met to discuss the ISS air leak, with NASA officials noting that Roscosmos “is confident they will be able to monitor and close the hatch to the Service Module prior to the leak rate reaching an untenable level,” according to the report.
“Although the teams continue to investigate the causal factors for the crack initiation and growth, the U.S. and Russian technical teams don’t have a common understanding of what the likely root cause is or the severity of the consequences of these leaks,” Cabana is quoted in SpaceNews as saying.
Monday's executive order follows a number of steps the White House has taken in recent months to create spaces to discuss the pace of AI development, as well as proposed guidelines.
Since the viral rollout of ChatGPT in November 2022 — which within two months became the fastest-growing consumer application in history, according to a UBS study — the widespread adoption of generative AI has already led to public concerns, legal battles and lawmaker questions. For instance, days after Microsoft folded ChatGPT into its Bing search engine, it was criticized for toxic speech, and popular AI image generators have come under fire for racial bias and propagating stereotypes.
The rate of air leaking from the hole increased around a week before the February 14 launch of the Progress MS-26 cargo spacecraft, which docked to the aft end of Zvezda. The hatch that connects the module to the ISS remained open for five days as the crew offloaded the cargo from Progress MS-26 onto the space station, but was closed afterwards.
NASA and Roscosmos are currently monitoring the leak and preparing to close the hatch to the service module when access is not required, in order to minimize the amount of air lost and isolate the leak from the rest of the space station. If required, the space agencies are prepared to permanently close the hatch should the leak rate become unmanageable. The ISS would function normally, but there would be one less docking port for spacecraft delivering cargo to the space station.
Biden's executive order directs the Department of Justice, as well as other federal offices, to develop standards for "investigating and prosecuting civil rights violations related to AI," the administration official said Sunday on the call with reporters.
"The President's executive order requires that clear guidance must be provided to landlords, federal benefits programs and federal contractors to keep AI algorithms from being used to exacerbate discrimination," the official added.
Why would Doodles need to do that? Because the crypto and NFT communities are absolutely swamped with scammers who pose as legitimate operations. And one of the common ways to do that is to create an account that looks identical to Doodles, buy a blue checkmark from Elon Musk for $8 which will push your reply to the top of all the responses, and compel people to click on a link that looks like it’s part of the original thread. Thus, saying “this is the last post in this thread,” helps make it clear that anything under that could be dangerous.
McDonald’s chief marketing officer, Tariq Hassan, touted the promotion in an email to Gizmodo on Thursday, seeming suitably optimistic about the partnership. And it seems clear that marketers are much more open to getting back into tech like NFTs than they were perhaps a year ago.
In August, the White House challenged thousands of hackers and security researchers to outsmart top generative AI models from the field's leaders, including OpenAI, Google, Microsoft, Meta and Nvidia. The competition ran as part of Def Con, the world's largest hacking conference.
"It is accurate to call this the first-ever public assessment of multiple LLMs," a representative for the White House Office of Science and Technology Policy told CNBC at the time.
“It may not be the first thing you think about when you talk about McDonald’s, but coffee is an integral part of the McDonald’s experience. Over the years we’ve continued to prioritize enhanced flavor, experience and even sustainability of our delicious coffee,” said Hassan.
“This year, we’re making our coffee even more special by giving our seasonal packaging a fresh and unexpected collaboration with Doodles, the perfect partner as so much of their content revolves around mornings and coffee moments,” Hassan continued. “The Doodles’ fanbase has for a long time fantasized about a McDonald’s collab, and it’s only fair to say THEY made it happen: we could not be more pleased to make our fandoms collide. Beyond the web3 scope, we chose Doodles because they are an agent of change in the cultural landscape, and they span beyond digital assets—an entertainment brand in itself with a fearless community we cannot wait to interact with and more importantly, bring to more people across the US.”
The competition followed a July meeting between the White House and seven top AI companies, including Alphabet, Microsoft, OpenAI, Amazon, Anthropic, Inflection and Meta. Each of the companies left the meeting having agreed to a set of voluntary commitments in developing AI, including allowing independent experts to assess tools before public debut, researching societal risks related to AI and allowing third parties to test for system vulnerabilities, such as in the competition at Def Con.
The goal of the project is to keep the scammer on the phone for as long as possible by engaging them in a lifelike, but meandering conversation. This is done without any input from humans other than the responses from the scammer.
A video shared by O2 of Daisy in action suggests the scammers very quickly become frustrated and angry at the way it responds. This includes giving out fake bank account information and personal details.
Murray Mackenzie, Director of Fraud at Virgin Media O2, explained that Daisy is "turning the tables on scammers — outsmarting and outmaneuvering them at their own cruel game simply by keeping them on the line."
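For a sense of how simple the core "keep them talking" loop can be, here is a minimal text-only Python sketch. O2 has not published Daisy's implementation; the canned stalling lines, the fake_detail helper, and the fixed one-second pause are all hypothetical stand-ins for what is presumably a voice-capable LLM pipeline.

```python
# Minimal sketch of the scam-baiting idea described above: keep the caller
# engaged with slow, meandering replies and never hand over real details.
# Everything here is a stand-in; O2's Daisy uses a lifelike AI voice.
import itertools
import random
import time

STALLS = itertools.cycle([
    "Oh hold on dear, the kettle's just boiled...",
    "Sorry, could you say that again? My hearing aid is playing up.",
    "Now where did I put my reading glasses...",
])

def fake_detail(prompt: str) -> str:
    """Return a plausible-sounding but entirely made-up detail, or a stall."""
    if "account" in prompt.lower() or "card" in prompt.lower():
        return f"It's... {random.randint(10000000, 99999999)}, I think. Or was it the other one?"
    return next(STALLS)

def bait_loop() -> None:
    start = time.time()
    while True:
        scammer_says = input("Caller: ")
        if not scammer_says:          # an empty line stands in for the caller hanging up
            break
        time.sleep(1)                 # a real system would add natural-sounding pauses
        print("Daisy:", fake_detail(scammer_says))
    print(f"Kept them on the line for {time.time() - start:.0f} seconds.")

if __name__ == "__main__":
    bait_loop()
```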
He recommends anyone in the UK worried about fraud to forward any call or text they suspect of being from a scammer to 7726 for free so it can be investigated.
The Dynamic Sub-Channel Operation (DSO) capability enables the network to assign sub-channels based on device requirements and abilities, increasing efficiency and boosting throughput by up to 80% (for advanced devices) while potentially avoiding bottlenecks.
Wi-Fi 8 will also incorporate refined data rates with additional levels in the Modulation Coding Scheme (MCS) lookup table, allowing devices to make smoother transitions in connection quality as they move through different areas. By adding finer gradations, such as a 16-QAM coding rate, Wi-Fi 8's MCS promises to reduce sudden drops in data rates, enhancing overall transmission stability and improving bandwidth by 5% to 30%, depending on the exact scenario.
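As a rough illustration of why finer MCS gradations help, here is a small Python sketch. The SNR thresholds, modulation labels, and relative rates below are invented round numbers, not the real Wi-Fi 8 MCS table; the sketch only shows that one extra intermediate entry turns a single large rate drop into two smaller steps as signal quality degrades.

```python
# Hedged sketch: not the real Wi-Fi 8 MCS table. It only illustrates why extra
# intermediate entries smooth out rate drops as signal quality falls.
COARSE_MCS = [  # (minimum SNR in dB, label, relative data rate)
    (25, "256-QAM 3/4", 1.00),
    (15, "64-QAM 3/4", 0.75),
    (5,  "16-QAM 1/2", 0.33),
]

FINE_MCS = COARSE_MCS + [
    (10, "16-QAM 3/4", 0.50),   # hypothetical extra gradation
]

def pick_mcs(snr_db: float, table):
    """Highest-rate entry whose SNR threshold the link still meets."""
    usable = [entry for entry in table if snr_db >= entry[0]]
    return max(usable, key=lambda entry: entry[2]) if usable else (None, "no link", 0.0)

for snr in (30, 18, 12, 7):
    _, coarse_label, coarse_rate = pick_mcs(snr, COARSE_MCS)
    _, fine_label, fine_rate = pick_mcs(snr, FINE_MCS)
    print(f"SNR {snr:>2} dB: coarse table -> {coarse_label} ({coarse_rate:.2f}), "
          f"finer table -> {fine_label} ({fine_rate:.2f})")
```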
The final Wi-Fi 8 standard is projected to be completed in 2028, and initial products based on the draft specification are anticipated in early 2028, pending regulatory approvals.
Reddy, whose brother reportedly witnessed the bizarre interaction, said she’d heard stories of chatbots — which are trained on human linguistic behavior in part — giving extremely unhinged answers.
This, however, crossed an extreme line.
“I have never seen or heard of anything quite this malicious and seemingly directed to the reader,” she said.
“If someone who was alone and in a bad mental place, potentially considering self-harm, had read something like that, it could really put them over the edge,” she worried.
In response to the incident, Google told CBS that LLMs “can sometimes respond with non-sensical responses.”
“This response violated our policies and we’ve taken action to prevent similar outputs from occurring.”
“Microsoft’s anticompetitive practices have escalated,” Musk’s attorney Marc Toberoff said in a statement. “Sunlight is the best disinfectant.”
Musk has a long-simmering opposition to OpenAI, a startup he co-founded and that has since become the face of generative AI through billions of dollars in funding from Microsoft.
Last spring, Google also scrambled to remove other shocking and dangerous AI answers, like telling users to eat one rock daily.
In October, a mother sued an AI maker after her 14-year-old son committed suicide when the “Game of Thrones”-themed bot told the teen to “come home.”
Musk has gained new prominence as a key force in President-elect Donald Trump’s incoming administration. Trump named Musk to a new role designed to cut government waste, after he donated millions of dollars to Trump’s Republican campaign.
The expanded lawsuit said OpenAI and Microsoft violated antitrust law by conditioning investment opportunities on agreements not to deal with the companies’ rivals. It said the companies’ exclusive licensing agreement amounted to a merger lacking regulatory approvals.
The latest data from Amazon reveals that 'Make America Great Again' baseball caps are among the hottest and best-selling new releases in the e-commerce giant's 'Sports-Specific Clothing' section. This follows President-elect Donald Trump's historic victory last week and suggests that previously closet supporters are now showing their support very overtly.
MAGA hats ranked number 2, 3, 14, 25, and 31 on the hottest new releases in the Sports-Specific Clothing section.
In a court filing last month, OpenAI accused Musk of pursuing the lawsuit as part of an “increasingly blusterous campaign to harass OpenAI for his own competitive advantage.”
The takeaway here is that Trump's victory, driven by his broad support across all races, sexes, social classes, and ethnicities—essentially everyday Americans who reject the so-called 'woke mind virus'—reflects a major political realignment and ushers in a new era in American politics, in which folks are openly expressing their love for patriotism, traditionalism, and their displeasure with Marxism.
The persecution and prosecution of President Donald Trump is finally winding down.
Jack Smith, a primary player in the lawfare campaign against Trump, has filed to dismiss the case involving classified documents at Mar-a-Lago. Rep. Jim Jordan has instructed Special Counsel Smith to preserve all records related to the cases.
The Deep State tried everything to make Trump lose. In total, 91 frivolous felony charges were thrown at the former president. All so they could brand him a felon, tie up resources and prevent him from campaigning.
Then there was the January 6th “insurrection”, multiple Russian collusion hoaxes and countless media lies.
The Deep State even prosecuted his advisers, such as 75-year-old famed economist Peter Navarro, who was the first former White House official ever imprisoned on a contempt-of-Congress charge. This dignified gentleman was frog-marched into prison as part of a political persecution campaign.
The entire affair was a disgrace to the nation.
On Election Day, Americans rejected this vile lawfare.
And soon, it will be time for justice. With GOP control of both chambers of Congress and a near-landslide win, Trump has a mandate from the American people to pursue it aggressively.
Truth and Reconciliation Commission
President Trump has not been shy about his intentions, stating, “The departments and agencies that have been weaponized will be completely overhauled.”
On day one, he promised to reissue his executive order allowing the President to remove “rogue bureaucrats” from their positions. Trump promised to “wield that power very aggressively” against the Deep State.
Our once and future President even promised to establish a “Truth and Reconciliation Commission”, opening the books on issues including the JFK assassination, illicit spying, and government corruption. I can’t wait to see what it uncovers.
Trump’s recent speech was packed full of details on how he plans to drain the swamp. He starts strong and keeps going:
This is how I will shatter the Deep State and restore government that is controlled by the people and for the people…
Make every Inspector General’s office independent and physically separated from the departments they oversee so they do not become the protectors of the Deep State…
Launch a major crackdown on government leakers who collude with the fake news to deliberately create false narratives and to subvert our government and our democracy…
Clean out all of the corrupt actors in our national security and intelligence apparatus…
Push a constitutional amendment to impose term limits on members of Congress.
“Shatter the Deep State”.
No ambiguity there...
Some will call what is coming revenge. But this will not be revenge. It will be justice. The distinction is important.
I would fully support justice here if the shoe was on the other foot, and the GOP were the offending party.
This type of behavior simply cannot stand. It undermines and corrupts the entire system. Re-establishing a just and fair government is critical.
It will be difficult, but I believe Trump will succeed this time. He has learned from the mistakes of his first term. Trump has the right people around today.
He has already rejected the idea of inviting Nikki Haley or Mike Pompeo to join the new administration. This is an excellent sign of things to come.
Given the mandate, the appointment of Attorney General will be particularly important. I have my eye on Mike Davis. He is the exact type of person this position calls for. Tough as nails, fair, and dedicated to cleaning up the system. I’ve met the man, and he’s just the type of person required for this job.
The stage is set for a historic draining of the swamp. Of course, there is still the potential for last-minute desperation moves by the Democrats, including their plan to disqualify Trump using the “insurrection clause”. But given my prediction that Congress will be controlled by Republicans, I think we’ll be in the clear in that department.
The finalized funding comes as the Biden administration and chipmakers reportedly rush to complete agreements before the incoming Trump administration. Polar Semiconductor was the first to finish negotiations and finalize its agreement. U.S. chip pioneer Intel (INTC) reportedly expects to finalize its $8.5 billion Chips Act grant before the end of the year.
During an appearance on The Joe Rogan Experience, President-elect Donald Trump criticized the Chips Act, which the Biden administration signed into law in 2022, partly to spur domestic advanced chip manufacturing.
Russia’s Gazprom PJSC has decided to play its favorite game: pipeline politics.
Starting November 16, Austria is off the guest list for Russian natural gas, following a €230 million ($242 million) arbitration spat between Gazprom and Austria’s OMV AG. OMV, refusing to let that cash slip away, decided to withhold payments to Gazprom.
“That chip deal is so bad, we put up billions of dollars for rich companies to come and borrow the money and build chip companies here, and they’re not going to give us the good companies anyway,” Trump said on the podcast, adding that the U.S. should instead put tariffs on chips coming into the U.S.
Meanwhile, TSMC’s chip production yields — or the number of functional chips it can produce per manufacturing process — at its Phoenix site are about four percentage points higher than those of comparable fabs in Taiwan, Bloomberg previously reported.
As one might have guessed, that ended poorly.
Unsurprisingly, European gas prices didn’t take the news well. Futures climbed 2.7% to €47.49 per megawatt-hour, as traders braced for yet another disruption in a continent that’s seen enough energy drama already.
Europe’s gas supply has been teetering on a knife’s edge since the 2022 energy crisis, with any whiff of trouble sending markets into a frenzy.
The Biden administration said the Chips Act funding includes a commitment to producing A16 chips in Arizona. In September, independent journalist Tim Culpan reported that TSMC is manufacturing A16 chips for Apple at Phase 1 of its Fab 21 in Phoenix. Production volume of the A16, which was launched in the iPhone 14 Pro in 2022, “will ramp up considerably” after the second stage of TSMC’s Phase 1 fab is finished, Taiwan-based Culpan said. That would put TSMC’s U.S. site on track to reach its target in the first half of next year, he said.
The comments come as Disney—along with other major media companies including Warner Bros. Discovery (WBD) (parent of HBO and Max), Comcast (CMCSA) (owner of NBC), and Paramount (PARA) (parent of CBS)—are all grappling with a shifting media landscape where streaming is rapidly eclipsing traditional television.
In 2023, Disney CEO Bob Iger hinted at the possibility of shedding the company’s linear TV networks, saying they “may not be core to Disney” anymore.
To OMV’s credit, they’re keeping calm and carrying on.
The company has proffered assurances that it will still be able to meet supply obligations through “alternative sources”—clear evidence that Europe’s increasingly interconnected gas network means that Austria is no longer entirely at Gazprom’s mercy.
Meanwhile, both Comcast and Warner Bros. Discovery are reportedly considering the idea of separating their streaming and studio assets from their struggling TV network businesses.
The House of Mouse on Thursday released mixed fourth-quarter results, with growth driven primarily by its streaming business, while the company’s traditional TV division continued to struggle.
Disney said its streaming platforms’ operating income rose to $321 million in the three months ending Sept. 30, compared with a loss of $387 million during the same period in 2023.
Still, the timing stings, with winter breathing down Europe’s neck.
Even the mere hint of a supply squeeze has governments nervous about heating bills and energy security.
Gazprom’s move is a reminder of Russia’s waning-but-still-potent energy influence in Europe. Sure, the continent has spent the last two years diversifying its energy sources—snapping up LNG cargos and tapping alternative pipelines—but Gazprom’s ability to cause a stir is alive and well.
It took Disney five years since the launch of Disney+ to finally turn a profit from its streaming business. Last quarter, Disney reported that its streaming services, which also include Hulu and ESPN+, turned a profit for the first time, and earlier than anticipated. The company had previously expected its streaming services to first turn a profit in the fourth quarter.
Disney continues to face challenges with its traditional television assets, including the ABC broadcast network and cable channels like the Disney Channel, National Geographic, and FX. In its fourth fiscal quarter, the company reported a 38% drop in operating income from its linear networks, falling to $498 million from $805 million in the same period of 2023.
As Europe’s heating season kicks off, this spat between Gazprom and OMV highlights just how fragile the energy landscape remains.
If nothing else, it’s a timely reminder for Europe to keep working on those contingency plans—and maybe stockpile a few extra blankets. Winter is here, and with it, more geopolitical games.
The news comes as Gazprom raised in October its 2024 investment plans by 4% to $16.9 billion, and as Europe’s benchmark natural gas prices surged on Thursday to the highest levels since last November.
“The top quark, in particular, is a very promising probe of QGP’s evolution over time. As the heaviest known elementary particle, the top quark decays into other particles an order of magnitude faster than the time needed to form QGP,” the researchers note.
“The delay between the collision and the top quark’s decay products interacting with the QGP could serve as a ‘time marker’, offering a unique opportunity to study the QGP’s temporal dynamics,” they added.
To first detect top quarks, researchers ran the Large Hadron Collider (LHC) and conducted an experiment colliding lead (Pb) ions at an energy of 5.02 teraelectronvolts (TeV) per nucleon pair.
A French court on Friday ordered the release of Lebanese pro-Palestine activist Georges Ibrahim Abdallah, Europe's longest-held political prisoner, after 40 years in prison.
Abdallah, a former guerrilla with the Popular Front for the Liberation of Palestine (PFLP), was sentenced to life in prison in 1987 for his alleged involvement in the 1982 murders of US military attache Charles Robert Ray and Israeli diplomat Yacov Barsimantov.
They noticed the formation of top quarks and their quick decay into other particles, including a bottom quark and a W boson. The W boson further broke down into either an electron or a muon, along with another particle called a neutrino.
“The result has a statistical significance of 5.0 standard deviations, making it the first observation of top-quark-pair production in nucleus-nucleus collisions,” stated the ATLAS team.
The 73-year-old has appealed his conviction 11 times since becoming eligible for release in 1999. The court said the communist activist would be released on December 6 on the condition that he leaves France and does not return, French anti-terror prosecutors said in a statement to AFP.
The prosecutors said they would appeal the court's decision, leaving the timing of Abdallah’s release uncertain.
The Lebanese activist, born to a Christian family in the northern village of Koubayat, has long maintained that he was not a "criminal" but "a fighter" who battled for the rights of Palestinians.
There is still some uncertainty
The above-mentioned results were obtained during the second run of the LHC. During the run, the study’s authors reported the rate of top-quark production with 35% relative uncertainty.
"The path I followed was dictated by the human rights violations perpetrated against Palestine," he told the judges during his latest appeal for release.
Wounded in 1978 during Israel's invasion of Lebanon, Abdallah, a secondary school teacher, joined the Marxist-Leninist PFLP, which carried out a series of plane hijackings during the 1960s and 1970s.
A year later, Abdallah, along with his brothers and cousins, founded his own pro-Palestine armed group, the Lebanese Armed Revolutionary Factions (LARF). The group had contact with other far-left armed outfits, including France's Action Directe, Italy's Red Brigades and the German Red Army Faction (RAF).
The Lebanese anti-Israeli Marxist group claimed responsibility for five attacks, including four in France in 1981 and 1982.
This means that the value of the measured production rate could be off by as much as ±35%. So, if the scientists measured a top-quark production rate of, say, 100 events, the true rate could be anywhere between 65 and 135 events, based on the uncertainty.
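A quick sanity check on that arithmetic, using the same illustrative 100-event figure (the numbers are placeholders, not the measured rate):

```python
# Illustrative check of a 35% relative uncertainty on a measured rate.
measured = 100           # placeholder event count from the example above
rel_uncertainty = 0.35   # 35% relative uncertainty quoted for LHC Run 2
low = measured * (1 - rel_uncertainty)
high = measured * (1 + rel_uncertainty)
print(f"Measured {measured} events -> true rate roughly between {low:.0f} and {high:.0f}")
```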
They suggest that the third run of the LHC will result in more accurate measurements. Hopefully, the results of current and future top-quark experiments will reveal valuable insights into the QGP.
'Honor of being accused'
In 1986, Abdallah was sentenced in Lyon to four years in prison for criminal association and possession of weapons and explosives. He was tried the following year for complicity in the assassination of Ray and Barsimantov, as well as for the attempted assassination of a third American diplomat in 1984.
In the murder trial, one of the French secret services' sources was Abdallah's lawyer, Jean-Paul Mazurier, who later revealed that he was an intelligence agent.
In court, Abdallah denied the accusation but declared: "If the people did not entrust me with the honor of participating in these anti-imperialist actions that you attribute to me, at least I have the honor of being accused of them."
Becker, a former Commander of the Naval Information Warfare Systems Command, noted that integrating advanced technology initially designed for government and military purposes into public use is nothing new.
"The internet began as a defense research initiative before becoming available to the public, where it’s now a basic expectation," Becker said.
Abdallah was then sentenced to life in prison, a far more severe punishment than the 10-year sentence sought by the attorney general. His lawyer, Jacques Verges, who previously defended clients such as Venezuelan militant Carlos the Jackal, saw the verdict as "a declaration of war".
A support committee was immediately formed, demanding Abdallah's "immediate release". The longest-serving prisoner in France has never expressed regret for his actions.
"He is doing well intellectually. He is an activist. He sticks to his guns, reads a lot and keeps himself very informed about what is happening in the Middle East. People write to him from all over the world," his lawyer, Jean-Louis Chalanset, told AFP in 2022.
Anthropic is only the latest AI developer to offer its technology to the U.S. government.
Following the Biden Administration’s memorandum in October on advancing U.S. leadership in AI, ChatGPT developer OpenAI expressed support for U.S. and allied efforts to develop AI aligned with “democratic values.” More recently, Meta also announced it would make its open-source Llama AI available to the Department of Defense and other U.S. agencies to support national security.
'A political victory'
"I am the victim of a political decision," Abdallah said shortly before the verdict on Friday.
Washington has consistently opposed Abdallah's release, while Lebanese authorities have repeatedly called for his freedom.
During Axios' Future of Defense event in July, retired Army General Mark Milley noted advances in artificial intelligence and robotics will likely make AI-powered robots a larger part of future military operations.
“Ten to fifteen years from now, my guess is a third, maybe 25% to a third of the U.S. military will be robotic,” Milley said.
Since 1999, the year he became eligible for release, all his parole requests have been rejected except one in 2013, when he was granted release on the condition that he be expelled from France.
When his request was granted that year, then-US Secretary of State Hillary Clinton contacted French Foreign Minister Laurent Fabius, saying in diplomatic cables revealed by WikiLeaks: "Although the French government has no legal authority to overturn the Court of Appeal's decision, we hope French officials might find another basis to challenge the decision's legality."
French Interior Minister Manuel Valls then refused to proceed with the order and Abdallah remained in jail.
In anticipation of AI's pivotal role in future conflicts, the DoD’s 2025 budget requests $143.2 billion for Research, Development, Test, and Evaluation, including $1.8 billion specifically allocated to AI and machine learning projects.
Protecting the U.S. and its allies is a priority. Still, Dr. Benjamin Harvey, CEO of AI Squared, noted that government partnerships also provide AI companies with stable revenue, early problem-solving, and a role in shaping future regulations.
"AI developers want to leverage federal government use cases as learning opportunities to understand real-world challenges unique to this sector," Harvey told Decrypt. "This experience gives them an edge in anticipating issues that might emerge in the private sector over the next five to 10 years.
Chalanset told AFP that the court's decision on Friday is not contingent on the government issuing such an order, calling it "a legal and a political victory". However, under French law, an appeal can suspend the court's decision, effectively deferring its execution.
Over the years, Abdallah's fate has mobilized activists close to the French Communist Party and the far left, who have accused successive governments of employing relentless tactics regarding the political prisoner's release.
He continued: "It also positions them to proactively shape governance, compliance policies, and procedures, helping them stay ahead of the curve in policy development and regulatory alignment."
Harvey, who previously served as chief of operations data science for the U.S. National Security Agency, also said another reason developers look to make deals with government entities is to establish themselves as essential to the government’s growing AI needs.
With billions of dollars earmarked for AI and machine learning, the Pentagon is investing heavily in advancing America’s military capabilities, aiming to use the rapid development of AI technologies to its advantage.
While the public may envision AI’s role in the military as involving autonomous, weaponized robots advancing across futuristic battlefields, experts say that the reality is far less dramatic and more focused on data.
Several years ago, after a particularly violent weekend in Chicago, then-Mayor Rahm Emanuel said: “This may not be politically correct, but I know the power of what faith and family can do. ... Our kids need that structure. ... I am asking ... that we don’t shy away from a full discussion about the importance of faith and family to develop and nurture character, self-respect, a value system, and a moral compass that allows kids to know good from bad and right from wrong.”
"In the military context, we’re mostly seeing highly advanced autonomy and elements of classical machine learning, where machines aid in decision-making, but this does not typically involve decisions to release weapons," Kratos Defense President of Unmanned Systems Division, Steve Finley, told Decrypt. “AI substantially accelerates data collection and analysis to form decisions and conclusions."
Emanuel’s plea for a broader discussion indicates that something must be truly amiss. And it is, as a new study directed by Nicholas Zill for the Institute for Family Studies indicates.
Looking at cities in Ohio, Zill found that the violent crime rate was much higher in cities where two-parent families were in the minority. For instance, only 44 percent of mothers in Springfield, Ohio, were married during the period of 2018–2022. The percentage was even worse in Cleveland, with only 33 percent married, and in Youngstown, which reported only 32 percent married. Cincinnati fared marginally better at 46 percent.
In contrast, in Cleveland Heights, 63 percent of mothers were married and in New Albany, Ohio, 91 percent were.
Founded in 1994, San Diego-based Kratos Defense has partnered extensively with the U.S. military, particularly the Air Force and Marines, to develop advanced unmanned systems like the Valkyrie fighter jet. According to Finley, keeping humans in the decision-making loop is critical to preventing the feared "Terminator" scenario from taking place.
“If a weapon is involved or a maneuver risks human life, a human decision-maker is always in the loop,” Finley said. “There's always a safeguard—a 'stop' or 'hold'—for any weapon release or critical maneuver."
ElevenLabs is part of a growing list of companies focused on creating realistic AI voices both in the US and internationally.
Last year, leading up to the launch of the Phantom Liberty expansion for Cyberpunk 2077, CD Projekt Red collaborated with Ukraine-based voice AI tool developer Respeecher to recreate the voice of Polish actor Miłogost "Miłek" Reczek, who passed away in 2021.
McNally, who also worked as a publicist for the Grateful Dead, is the author of “A Long Strange Trip: The Inside History of the Grateful Dead,” as well as the editor of “Jerry on Jerry: The Unpublished Jerry Garcia Interviews.”
ElevenLabs said the ElevenReader can also help individuals with visual impairments or learning disabilities.
And the differences between these cities and their rates of violent crime are startling. Zill found that in Springfield, there were 1,298 incidents of violent crime reported per 100,000 residents, 1,895 incidents in Cleveland, 800 in Cincinnati, and 699 in Youngstown. Meanwhile, Cleveland Heights only reported 267 incidents and New Albany had 99.
This is not surprising. It has been well documented how the rise of fatherless homes has led to a concurrent rise in incarceration rates. Twenty years ago, Cynthia Harper of the University of Pennsylvania and Sara S. McLanahan of Princeton University found that young men who grow up in fatherless homes are twice as likely to end up in jail as those who come from traditional two-parent families.
The numbers of single-parent homes have only gotten worse since.
Out-of-wedlock births are now rampant among all groups. In 2022, 39.8 percent of children were born to single mothers. In Louisiana, Mississippi, and New Mexico, the percentage is even higher: over 48 percent.
The issue of missing fathers is particularly acute in our cities but has serious consequences for our society as a whole. Single mothers can be great mothers, but in a single-parent home, as Emanuel noted, something is lacking—something necessary for children’s emotional and mental development.
Respeecher later partnered with Calm to bring the vocal styling of “It’s a Wonderful Life” star Jimmy Stewart to the app in December. In September, Google launched its AI-powered NotebookLM, which allows users to turn articles and videos into audio podcasts with the click of a button.
What is lacking is the unique role a father plays in a child’s life.
For instance, fatherless girls often become severely depressed, self-destructive, or sexually promiscuous as they seek to fill the emotional vacuum left by an absent father.
Boys, on the other hand, as this study about the link between the lack of two-parent homes and violent crime documents, tend to deal with that void with anger and rage. Thus, many of the tragic shootings or horrible abuses of women we have seen over the past several years have been instigated by boys from broken homes.
Launched in July, Moonshot has amassed more than 90,000 downloads across the iOS App Store and Google’s Android Play Store based on data from SensorTower.
Finally, numerous studies have shown that children in single-parent homes are more likely to engage in substance abuse than those in stable, two-parent (mother and father) homes. These children eventually grow up into adults and bring their drug dependency with them, creating another generation of children trapped in the cycle of family dysfunction, drug abuse, and single parenthood. It is a triple whammy resulting in a downward spiral of despair with each succeeding generation.
Aimed at onboarding retail participants—or those with less crypto knowledge—Moonshot allows users to bypass the complications of wallet seed phrases and decentralized exchanges to invest in meme coins, or highly volatile tokens based on internet memes, celebrities, and more.
Instead, Moonshot allows users to deposit with popular payment methods like Venmo or debit card, granting them nearly instant access to meme coins otherwise not available on centralized exchanges like Coinbase.
Thus, a society is formed where the dividing line between the haves and have-nots is determined at the very beginning of life. If children are born into a stable, two-parent family they are more likely to be successful in life and avoid bad choices such as engaging in violence and substance abuse. If they are born into the instability of a continued cycle of a broken family, they will likely fall prey to the resulting pathologies.
That is why, if we are to truly deal with the current violence in our inner cities, we need to focus first on the behaviors that have led to that violence—which means a dedicated effort to restore two-parent families rather than continuing to ignore the issue by enacting policies that encourage broken families. That is my hope—and the result of such an effort will not only be healthier children, but a safer and healthier society as well.
The rise up the rankings corresponds with company-reported record-breaking fiat deposits on Moonshot and a new daily revenue high of more than $130,000 on Nov. 12, according to data from DefiLlama.
“Boomers are rotating. Coming for #1,” Moonshot commented on Twitter (aka X) as the app overtook TradFi investing app E-Trade.
However, competition is fierce—and more is coming. Established platforms like Photon and BullX offer more sophisticated traders access to a greater breadth of meme coins and earlier entry points directly on-chain. Meanwhile, the team behind Tensor—the Solana-based NFT marketplace—is gearing up to release its social trading platform Vector.fun.
3D-printed capsules could be a solution
To address this challenge, LLNL has launched a research project to develop 3D-printed fuel capsules.
“Now that we have achieved and repeated fusion ignition, the Lab is rapidly applying our decades of know-how into solving the core physics and engineering challenges that come with the monumental task of building the fusion ecosystem necessary for a laser fusion power plant,” said Tammy Ma, lead for LLNL’s inertial fusion energy institutional initiative.
The former offensive tackle, who retired from professional football in 2023, recently took to social media to share his win, declaring, “Results silence all debates.”
Interestingly, the former player, who is currently fronting an initiative dubbed Bitball that advocates paying athletes in BTC, has indicated that he will not be cashing in on his windfall any time soon. “How long will I hold Bitcoin? Okay, how long do I plan to breathe?” Okung said in a November 14 post on X.
The project is developing a first-of-its-kind dual-wavelength, two-photon polymerization (DW-2PP) approach to 3D printing. This technique uses two different light sources to selectively print different materials.
This enables the creation of complex geometries with sub-micron resolution, potentially enabling the production of fuel capsules at the scale required for a power plant.
“We are focusing on a specific type of wetted-foam capsule, in which liquid DT can be wicked into a uniform foam layer on the inside of the spherical capsule by capillary action,” said Xia, co-principal investigator and a staff scientist in the Lab’s Materials Engineering Division.
Trump’s Election Win Aftermath
Bitcoin’s recent price rally came after Donald Trump’s victory in the 2024 U.S. presidential election. The rally attracted major inflows into digital assets, with the OG cryptocurrency alone pulling in $1.8 billion, according to CoinShares’ Digital Asset Fund Flows weekly report.
The rally extended beyond Bitcoin, with Ethereum gaining $157 million in inflows and notable activity among altcoins like Solana, Uniswap, XRP, and Tron.
Overall, the total cryptocurrency market capitalization reached a new all-time high of $3.12 trillion in early Asian trading on Tuesday, Nov. 12. This surpassed the previous record of $3.08 trillion set nearly three years ago in November 2021.
“The current DT ice layering process takes up to a week to complete with extreme meticulousness. It’s possible that 3D printing is the only tool for this kind of complex geometry at scale.”
The project has already shown promising results, with 3D-printed targets successfully used in two NIF experiments in 2024.
While the use of 3D printing for fusion energy is still in its early stages, it represents a potential solution to a critical manufacturing challenge.
If this technology is successful, it could speed up the development of fusion power plants. This would bring the world closer to a future with clean, safe, and abundant energy.
“Unlocking fusion is a strategic asset for US competitiveness. It’s imperative that we invest in fundamental science and technology to build on the historic achievement of fusion ignition,” concluded Jeff Wisoff, principal associate director for LLNL’s NIF & Photon Science Directorate.
2016
Google DeepMind's AlphaGo defeated Lee Sedol, one of the world's top Go players. Go, a complex board game with more possible moves than atoms in the universe, had long been considered a challenge for AI. AlphaGo's 4–1 victory over Sedol was a groundbreaking moment in AI, showcasing the power of deep learning techniques to handle highly complex strategic tasks that had previously been beyond AI's capabilities.
Hanson Robotics introduced Sophia, a highly advanced humanoid robot. Sophia can recognize faces, make eye contact and hold conversations using a combination of image recognition and natural language processing.
A lawsuit by the world’s wealthiest man against one of the fastest growing companies of all time is necessarily interesting stuff. But while the allegations are yet to be proven, the case has already exposed a batch of emails between Elon Musk, Sam Altman, and others during OpenAI’s early days. Here are a few of the more interesting snippets we found while perusing their correspondence.
Bear in mind that these emails were exposed as part of an attempt to prove OpenAI is somehow breaking antitrust law (a frankly implausible allegation). Musk is also revealing to some extent his feeling of betrayal when OpenAI abandoned its original vision of being a nonprofit with the Tesla CEO as its leader.
They do not tell the whole story, but they are still interesting in their own right.
The current structure provides you with a path where you end up with unilateral absolute control over the AGI [artificial general intelligence]. You stated that you don’t want to control the final AGI, but during this negotiation, you’ve shown to us that absolute control is extremely important to you.
As an example, you said that you needed to be CEO of the new company so that everyone will know that you are the one who is in charge, even though you also stated that you hate being CEO and would much rather not be CEO.
This is, as you say, interesting stuff. I didn't know the agreement went this deep. No wonder Elon is taking this personally. Sam really went against his promises. Maybe a promise of safety will be xAI's advantage over OpenAI with investors.
Thus, we are concerned that as the company makes genuine progress towards AGI, you will choose to retain your absolute control of the company despite current intent to the contrary.
The goal of OpenAI is to make the future good and to avoid an AGI dictatorship. You are concerned that Demis [Hassabis, at Google-owned DeepMind] could create an AGI dictatorship. So do we. So it is a bad idea to create a structure where you could become a dictator if you chose to, especially given that we can create some other structure that avoids this possibility.
Sutskever also voices worries about Altman, using words much like the board would later use while accusing him of not being “consistently candid”:
We haven’t been able to fully trust your judgements throughout this process, because we don’t understand your cost function.
We don’t understand why the CEO title is so important to you. Your stated reasons have changed, and it’s hard to really understand what’s driving it.
Is AGI truly your primary motivation? How does it connect to your political goals?
In the event we decide to buy Cerebras, my strong sense is that it’ll be done through Tesla.
This, by the way, was back when Musk was angling to have OpenAI be just one of his many properties, and the leaders were open to that possibility. As OpenAI co-founder Andrej Karpathy wrote:
The most promising option I can think of, as I mentioned earlier, would be for OpenAI to attach to Tesla as its cash cow. […] If we do this really well, the transportation industry is large enough that we could increase Tesla’s market cap to high O(~100K), and use that revenue to fund the AI work at the appropriate scale.
OpenAI was at one point considering acquiring Cerebras, an AI chipmaking company that's in the process of going public.
Elon Musk’s ongoing lawsuit against OpenAI has new exhibits that describe how OpenAI was contemplating snatching up Cerebras in or around 2017 — a year after Cerebras’ founding, and just a few years after OpenAI began operating.
In an email addressed to OpenAI CEO Sam Altman and Musk, Ilya Sutskever, one of OpenAI’s co-founders and ex-chief scientist, floated the idea of buying Cerebras through Tesla, Musk’s EV company. At the time, Musk was financially involved in OpenAI and exerted some influence over its direction.
#elonmusk #openai #lawsuit #samaltman #cerebras #semiconductors #ai
“In the event we decide to buy Cerebras, my strong sense is that it’ll be done through Tesla,” Sutskever wrote in September 2017. “But why do it this way if we could also do it from within OpenAI? Specifically, the concern is that Tesla has a duty to shareholders to maximize shareholder return, which is not aligned with OpenAI’s mission. So the overall result may not end up being optimal for OpenAI.”
In an earlier email dated July 2017 from Sutskever to Musk and OpenAI co-founder Greg Brockman (now the company’s president), Sutskever mentions several Cerebras-related agenda items: “Negotiate merger terms with Cerebras” and “More due diligence with Cerebras.”
The merger deal would ultimately fall through, although it’s not clear from the exhibits why. And OpenAI would end up shelving its chip ambitions for years.
Cerebras, based in Sunnyvale, California, builds custom hardware to run and train AI models and claims its chips are faster and more efficient than Nvidia’s flagship offerings for AI workloads.
Having raised $715 million in venture capital, Cerebras is reportedly seeking to roughly double its $4 billion valuation through the IPO. But it faces considerable challenges. A single Abu Dhabi firm, G42, accounted for 87% of Cerebras’ revenue in the first half of 2024, and U.S. lawmakers have expressed unease about G42′s historic ties to China. Cerebras CEO Andrew Feldman also has a checkered past, having pled guilty to circumventing accounting controls while a VP at public company Riverstone Networks.
Palantir's stock has been on a tear since the company's better-than-expected earnings report last week, which was a day before the U.S. presidential election.
Palantir shares continued their torrid run on Friday, soaring 11% to a record, after the developer of software for the military announced plans to transfer its listing to the Nasdaq from the New York Stock Exchange.
The stock jumped past $65.77 at the close, lifting the company's market cap to $150 billion. The shares are now up more than 45% since Palantir's better-than-expected earnings report last week and have almost quadrupled in value this year.
Palantir said late Thursday that it expects to begin trading on the Nasdaq on Nov. 26, under its existing ticker symbol "PLTR." While changing listing sites does nothing to alter a company's fundamentals, board member Alexander Moore, a partner at venture firm 8VC, suggested in a post on social media site X that the move could be a win for retail investors because "it will force" billions of dollars in purchases by exchange-traded funds.
We see that AI companies like OpenAI, Google, and Anthropic are facing a bit of a challenge with their new models.
These could be rumors
OpenAI's Orion didn't perform as expected and Google's Gemini has issues with data limitations and obeying instructions
The struggle is getting high-quality data and the expense of developing AI.
But despite these challenges, they are still working on new models and I'm hopeful we'll see them in 2025.
The solution they need to seek out is better, high-quality data, and also partnering with publishers, because there are way too many copyright lawsuits flying all over the place.
They should also experiment with synthetic data. However, I don't believe they've hit any kind of wall; I think big things are pending.
Sam Altman and others within OpenAI have countered those claims.
As stated elsewhere, notice how xAI and Elon Musk didn't state they were facing headwinds. It all comes down to compute to a degree, and they are constrained.
I absolutely agree with you brother, and that's why I had to say they're rumors, which I doubt. I watched an interview of Sam Altman countering it. I believe there are bigger changes coming.
Training is still ongoing and I'm awaiting the arrival of Orion or GPT-5.
When do you think it will no longer be "generative pre-trained"?
No idea. I do expect a major jump in the next generation of models, to the point where people are blown away by the difference. Some will take a bit longer.
People also have to realize a delay might mean 6 months. Many are acting like it will take years. It will not.
Hahaha, a 6-month delay, I'll take my chances with that. Thanks for the info, brother.
That is the nature of generative AI progress. It isn't measured in years but months.
We get anxious but progress is being made daily. For me, LeoAI cannot get here soon enough.
I agree. It is crucial to get that going as soon as we can. We need to get people involved.
This starts with the feeding of data.
Super Micro Computer could be headed down a path to getting kicked off the Nasdaq as soon as Monday.
That's the potential fate for the server company if it fails to file a viable plan for becoming compliant with Nasdaq regulations. Super Micro is late in filing its 2024 year-end report with the SEC, and has yet to replace its accounting firm. Many investors were expecting clarity from Super Micro when the company reported preliminary quarterly results last week. But they didn't get it.
Do the regulations they need to comply with involve losing more money? Maybe that could be the reason.
There are levels exchanges have that must be maintained. If companies fall below that level, they risk being delisted.
The primary component of that plan is how and when Super Micro will file its 2024 year-end report with the Securities and Exchange Commission, and why it was late. That report is something many expected would be filed alongside the company's June fourth-quarter earnings but was not.
The Nasdaq delisting process represents a crossroads for Super Micro, which has been one of the primary beneficiaries of the artificial intelligence boom due to its longstanding relationship with Nvidia and surging demand for the chipmaker's graphics processing units.
The one-time AI darling is reeling after a stretch of bad news. After Super Micro failed to file its annual report over the summer, activist short seller Hindenburg Research targeted the company in August, alleging accounting fraud and export control issues. The company's auditor, Ernst & Young, stepped down in October, and Super Micro said last week that it was still trying to find a new one.
T-Mobile's network was among the systems hacked in a damaging Chinese cyber-espionage operation that gained entry into multiple U.S. and international telecommunications companies, The Wall Street Journal reported on Friday citing people familiar with the matter.
Hackers linked to a Chinese intelligence agency were able to breach T-Mobile as part of a months-long campaign to spy on the cellphone communications of high-value intelligence targets, the Journal added, without saying when the attack took place.
"T-Mobile is closely monitoring this industry-wide attack," a company spokesperson told Reuters in an email.
"At this time, T-Mobile systems and data have not been impacted in any significant way, and we have no evidence of impacts to customer information."
It was unclear what information, if any, was taken about T-Mobile customers' calls and communications records, according to the WSJ report.
With Trump as president, the response from America could be a severe one. It surprises me that the Chinese have enough intelligence capability to hack such companies. Do you think there's an inside man?
Newly unsealed documents from a WhatsApp lawsuit show that NSO Group's spyware, Pegasus, was used to hack as many as "tens of thousands" of devices.
On Thursday, WhatsApp scored a legal victory by convincing a U.S. federal judge to publicly release three court documents that include new revelations about the inner workings of Pegasus, the spyware made by Israeli surveillance tech maker NSO Group.
The newly unsealed documents include information coming from depositions of NSO employees during the legal proceedings, internal company documents, as well as — ironically — WhatsApp messages exchanged between NSO employees, which WhatsApp obtained by sending subpoenas to NSO.
The documents also reveal that NSO disconnected 10 government customers in recent years from accessing the Pegasus spyware, citing abuse of its service.
This release of new revelations is the latest development in the lawsuit that WhatsApp filed in 2019, accusing NSO of violating the anti-hacking law, the Computer Fraud and Abuse Act, and breaching WhatsApp’s terms of service, by accessing WhatsApp servers and targeting individual users with spyware sent over the chat app. The accusations are based on a series of cyberattacks against WhatsApp users, including journalists, dissidents, and human rights advocates.
“The evidence unveiled shows exactly how NSO’s operations violated U.S. law and launched their cyber-attacks against journalists, human rights activists and civil society,” WhatsApp spokesperson Zade Alsawah said in a statement sent to TechCrunch. “We are going to continue working to hold NSO accountable and protect our users.”
At first I used to ask myself why anyone would hack a system to get information, but when I see how important data is becoming in our times, I know anyone would do anything to get their hands on some secret data.
Information was always valuable. That is the heart of corporate or government espionage. People were always trying to steal each other's secrets.
So let me ask: now that AI generates mass data for self-training, will people be stealing this data 🤔 and who are they going to sell it to?
Which data do you think is more valuable, quality human data or quality AI-generated data?
What data are you referring to? The data on Hive is public, anyone can use it simply by setting up an API.
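To make that concrete, here is a minimal sketch of pulling recent public Hive posts through a JSON-RPC node. The endpoint api.hive.blog and the condenser_api method are assumptions based on the commonly documented public Hive nodes; any public node exposing the same API should behave the same way.

```python
# Minimal sketch: fetching recent public Hive posts via a JSON-RPC node.
# Assumption: api.hive.blog is reachable and exposes condenser_api
# (any public Hive node offering that API should work the same way).
import json
import urllib.request

ENDPOINT = "https://api.hive.blog"  # assumed public node

payload = {
    "jsonrpc": "2.0",
    "method": "condenser_api.get_discussions_by_created",
    "params": [{"tag": "technology", "limit": 5}],
    "id": 1,
}

req = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    posts = json.loads(resp.read())["result"]

# Print the author and title of each recent post tagged #technology.
for post in posts:
    print(f"@{post['author']}: {post['title']}")
```

Anything dropped into a threadcast ends up retrievable this way, which is what makes the data usable for something like LeoAI training down the line.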
Human data is the most beneficial but humans interacting with synthetic data is helpful also. The value of synthetic data, long term, is hotly debated. Nobody knows if it degrades as it is fed through repeatedly into model training.
That is why I tell people to interact with what is posted, even if synthetic. The responses help to generate context.
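On the question of whether synthetic data degrades when fed back in repeatedly, here is a toy sketch of the worry. It is purely illustrative, with invented parameters, and is not a claim about how any real model is trained: each generation is fitted only to samples produced by the previous generation, and in this toy setup the fitted spread drifts (and tends to shrink slightly) over time.

```python
# Toy "model collapse" illustration: each generation trains only on the
# previous generation's synthetic output. Not a statement about real LLMs.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: the "human" data, drawn from a fixed reference distribution.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for generation in range(1, 21):
    mu, sigma = data.mean(), data.std()      # "train" a tiny model on current data
    print(f"gen {generation:2d}: fitted mean={mu:+.3f} std={sigma:.3f}")
    data = rng.normal(mu, sigma, size=50)    # next generation sees only synthetic samples
```

Whether anything like this shows up in large-scale training is exactly the open question being debated.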
BTC ATH
Bitcoin has hit $93K, breaking another ATH record. Right now, those who invested are taking about $5B in profits daily. Long-term holders of BTC always rule, but is this the market peak, or will it go higher than anyone expected?
While Mark Cuban and other sore losers are leaving X to shout into the void, several major advertisers have returned to the platform.
Comcast, IBM, Disney, Warner Bros. Discovery and Lionsgate Entertainment have all resumed ad spending on the social media giant - albeit this is more of a toe-dip than a full recommitment. According to Adweek, the brands collectively spent less than $3.3 million on X from January to September 2024, a far cry from the $170 million spent during the same period in 2023.
Either way, it's an admission that pulling ad spend over 'hate speech' and 'antisemitism' was nothing more than a giant virtue signal, particularly considering Facebook and Instagram's long history of providing a safe forum for child sexual abuse.
While a global survey by Kantar of senior marketers across 20 countries found that 26% of them plan to cut spending on X in 2025, the 2024 election may have changed that.
"X’s owner now has the ear of the president-elect, a man who has a long history of helping his friends, and punishing his enemies," said Max Willens, senior analyst at Emarketer. "Sending at least a trickle of ad spending toward X may be seen as good for business, albeit in an indirect way."
Advertising Cartel Under Fire
Speaking of the tide turning, the woke cabal of advertisers trying to starve conservative platforms out of a voice is now coming under fire (have we mentioned lately that we really appreciate our premium subscribers?).
In a Wednesday letter to Microsoft, Alphabet (Google), Apple, and Meta, FCC Commissioner Brendan Carr accused them of having "participated in a censorship cartel that included not only technology and social media companies but advertising, marketing, and so-called "fact-checking" organizations as well as the Biden-Harris Administration itself."
"The relevant conduct extended from removing or blocking social media posts to suppress their information and viewpoints, including through efforts to delist them, lower their rankings, or harm their profitability."
Carr then suggested that their protection from liability under Section 230 may be on the line.
"As you know, Big Tech's prized liability shield, Section 230, is codified in the Communications Act, which the FCC administers. As relevant here, Section 230 only confers benefits on Big Tech companies when they operate, in the words of the statute, "in good faith."
Wow...
Carr then set his sights on NewsGuard - which Jonathan Turley notes has been long accused by conservatives "of targeting conservative and libertarian sites and carrying out the agenda of its co-founder Steven Brill. Conversely, many media outlets have heralded his efforts to identify disinformation sites for advertisers and agencies."
Basically, NewsGuard bombards conservative sites with struggle-session questionnaire emails demanding explanations for the slightest of indiscretions, after which they issue a "report card" that advertisers use to justify pulling ad spend.
As Carr notes in the letter; "It is in this context that I am writing to obtain information about your work with the one specific organization - the Orwellian named NewsGuard. As exposed by the Twitter Files, NewsGuard is a for-profit company that operates as part of the broader censorship cartel. Indeed, NewsGuard bills itself as the Internet's arbiter of truth or, as its co-founder put it, a "Vaccine Against Misinformation." Newsguard purports to rate the credibility of news and information outlets and tells readers and advertisers which outlets they can trust."
Carr suggests following NewsGuard's ratings may constitute a violation of Section 230 (this is huge).
"NewsGuard's own track record raises questions about whether relying on the organization's products would constitute "good faith" actions within the meaning of Section 230. For one, reports indicate that NewsGuard has consistently rated official propaganda from the Communist Party of China as more credible than American publications."
"For another, NewsGuard aggressively fact checked and penalized websites that reported on the COVID-19 lab leak theory."
Carr then demands the following information:
That's less than 10 percent of what they spent on the platform before. I believe this move is to make sure they're in Elon's good books, because with a simple government regulation they could be done for.
Marc Andreessen believes AI will need more human help soon, so there'll be a boom in hiring of programmers, doctors, and lawyers to guide its growth with their data.
What do you think about this, guys? More jobs or just false predictions?
Overall it will end up killing jobs. What he is referring to is the scaling of the industry. These people are going to need to be hired to help train the AI.
Then this is just temporary hiring, so that means job losses are inevitable.
Elon Musk's artificial intelligence company xAI is raising up to $6 billion at a $50 billion valuation, according to CNBC's David Faber.
Sources told Faber that the funding, which should close early next week, is a combination of $5 billion expected from sovereign funds in the Middle East and $1 billion from other investors, some of whom may want to re-up their investments.
The money will be used to acquire 100,000 Nvidia chips, per sources familiar with the situation. Tesla's Full Self Driving is expected to rely on the new Memphis supercomputer.
Musk's AI startup, which he announced in July 2023, seeks to "understand the true nature of the universe," according to its website. Last November, xAI released a chatbot called Grok, which the company said was modeled after "The Hitchhiker's Guide to the Galaxy." The chatbot debuted with two months of training and had real-time knowledge of the internet, the company claimed at the time.
A spokesperson told CNBC the company "intends to take all necessary steps to achieve compliance with the Nasdaq continued listing requirements as soon as possible."
While the delisting issue mainly affects the stock, it could also hurt Super Micro's reputation and standing with its customers, who may prefer to simply avoid the drama and buy AI servers from rivals such as Dell or HPE.
"Given that Super Micro's accounting concerns have become more acute since Super Micro's quarter ended, its weakness could ultimately benefit Dell more in the coming quarter," Bernstein analyst Toni Sacconaghi wrote in a note this week.
Yes, if the delisting happens, investors would definitely get scared and not invest. But the worst would be the customers that wouldn't want to continue doing business with them.
Fraud, maybe even treason. Lots of rumors that they smuggled NVIDIA chips to China despite the ban.
In addition, a few years back, they were at the center of the story about alleged spy chips listening in on cloud data centers...
Xiaodi Hou, the co-founder and former CEO of self-driving trucking startup TuSimple, has urged a California district court to issue a temporary restraining order to stop the company from transferring its remaining U.S. assets to China, according to a recent court filing.
Hou, who plans to apply for a temporary restraining order in December during the next scheduled court hearing, is hoping to keep TuSimple from moving tens of millions of dollars in cash to China. As of September, TuSimple had roughly $450 million in capital. Hou is also requesting expedited discovery of evidence to aid his requests for the motion.
Hou’s declaration to the court, filed on Monday, is the latest escalation in the battle between TuSimple and some of its shareholders, over the company’s attempts to use investor capital to fund a new AI-generated animation and video game business in China.
This is the first time Hou — who was ousted from his role as CEO in 2022 — has publicly accused TuSimple and its leaders of funneling assets toward animation and gaming businesses owned by or with direct ties to Mo Chen, TuSimple co-founder and chairman of the board, under the guise of a business pivot. Hou also argued the company violated SEC regulations by neither informing nor gaining approval from shareholders before changing its business direction or transferring funds to China.
Hou now heads a new autonomous trucking startup in Texas
TuSimple, once valued at $8.5 billion after its 2021 IPO, faced setbacks that led to its U.S. shutdown and delisting in January 2024. The company’s stated goal was to commercialize its AV technology in China. But as the year progressed, TuSimple slashed its workforce, ceased self-driving operations, and began hiring staff to handle jobs related to AI gaming and animation.
Shareholders sent a letter to the board in August after learning TuSimple was putting resources toward AI gaming and animation. The board responded a couple weeks later by publicly announcing the new business unit.
Flight tests of a KC-135 Stratotanker with an artificial intelligence copilot are expected to begin next year. Boston-based Merlin Labs, Inc. and the U.S. Air Force's 6th Air Refueling Wing at MacDill Air Force Base, Florida, have been testing the Merlin Pilot system to provide autonomy and automation for the KC-135 tanker, with the aim of reducing crew size and letting crew members focus on mission-critical tasks.
In February, Merlin said it had signed a multi-year Cooperative Research and Development Agreement with Air Mobility Command (AMC) and Air Force Materiel Command to develop and integrate the Merlin Pilot on the KC-135, informing the Next Generation Air Refueling System and "paving the way for unmanned autonomous KC-135 operations, an unprecedented new capability for AMC and the USAF."
"This joint USAF and Merlin project will evaluate the feasibility of scaling the Merlin Pilot to large transport aircraft, especially its innovative AI capabilities," the company said. "Reaching this milestone demonstrates that Merlin's systems engineering processes are consistent with the airworthiness standards defined by the USAF and allows Merlin to progress toward aircraft integration, design completion, and test operations."
The Merlin Pilot is equipped with advanced sensor technologies that allow it to monitor the state of the aircraft and the surrounding environment while guiding the flight and recommending trajectory adjustments.
The system also features a communications module powered by Natural Language Processing algorithms, allowing verbal interaction with air traffic control "in the same way a human pilot does," the company says.
The U.S. Air Force's acceptance of the Merlin Pilot airworthiness plan on the KC-135 "is the first major milestone to be executed under this collaboration and lays the groundwork for the Merlin Pilot's certification basis and eventual Military Flight Release (MFR)," Merlin said. "Integrating the Merlin Pilot on the KC-135 kicks off Merlin's Part 25 airworthiness programs and is material to continued advances in this class of aircraft. Merlin is targeting late 2024 for its design completion, with ground testing, flight testing, and demonstrations to take place in 2025."
In June, the company said it had received a $105 million contract from U.S. Special Operations Command to provide advanced automation for the Air Force's Lockheed Martin C-130J transport aircraft as a step toward such capabilities for other special operations forces (SOF) fixed-wing aircraft over the next five years.
Merlin said it has a two-year partnership with the USAF and that this summer's C-130J contract will cover advanced automation design and integration on the C-130J; ground testing; test readiness review and flight testing; a full takeoff-to-landing demonstration; and integration on other SOF aircraft.
Just wait until the Ukrainians see this robot...
One of Tesla's competitors in robotics is the Chinese company Unitree, which is already selling its humanoid G1 robot for $40,000. The company also sells robo-dogs on the Amazon marketplace. Another Chinese robotics company, Deep Robotics, released a new video featuring one of its robo-dogs equipped with wheels, showcasing its ability to scale hillsides and navigate off-road terrain.
Deep Robotics describes itself as a "leader in embodied AI technology innovation and application," adding it's "the first in China to achieve fully autonomous inspection of substations with quadruped robots."
Earlier this week, Deep Robotics posted a short video on YouTube featuring one of its quadruped robots with wheels. The robot's mobility is absolutely terrifying.
Public trade data compiled by counterparty and supply chain risk intelligence firm Sayari shows that Hangzhou Yunshenchu Technology Co., Ltd owns Deep Robotics.
The company said its core team members originate from "well-known universities," including Zhejiang University, Shanghai Jiao Tong University, Beijing Institute of Technology, Wuhan University, the University of Electronic Science and Technology of China, the University of Chinese Academy of Sciences, New York University, the University of Illinois at Urbana-Champaign, and the Georgia Institute of Technology.
Just wait until the Ukrainians see this robot. They might want to strap a machine gun atop this Skynet-like creature.
These Chinese... so many names for them to create and then put Skynet? LOL
There will still be a crazy person in the future who will do something stupid with AI.
The chances are of course small, but I don't rule out a "Skynet" for real life.
Musk has said this will never happen, but even he is afraid of machines creating their own consciousness.
Hard for something that is deterministic to become something that is non deterministic.
And what gets overlooked is the ability for AI to operate counter to what other AI is doing.
Did you see this?
I liked the idea of using it to combat scammers hahaha...
Great use for AI in this case.
That is what will happen. A lot of the challenges presented by AI will be solved by AI.
This will be very important. I hope that more similar things can emerge for our protection and safety.
How strange... no matter how many times I refresh the page, it doesn't translate. I'll have to open it on my laptop.
On Friday (the 15th), China sent brick samples into space aboard a rocket, to subject them to tests and determine whether it would be possible to build a base on the Moon by manufacturing more bricks from lunar soil itself.
The cargo rocket lifted off late Friday night, local time, bound for China's Tiangong space station, which orbits between 400 and 450 km from Earth, as part of the mission to send humans to the Moon by 2030 and build a permanent base on the satellite by 2035.
China has invested billions of dollars in its space program in recent decades, with the goal of catching up to the United States and Russia.
Several brick samples of different compositions will be subjected to extreme conditions similar to those on the Moon.
"The main objective is to expose them to space," Zhou Cheng, a professor at Huazhong University of Science and Technology in Wuhan (central China), whose research team developed the bricks, told AFP.
"We will place the bricks outside the space station and leave them exposed to the elements to see whether or not their performance deteriorates," he explained.
Any material on the Moon will have to withstand extreme conditions, starting with drastic temperature swings that can range from -190°C to +180°C.
In addition, the Moon has no atmosphere to protect it, so it receives large amounts of cosmic radiation and micrometeorites. Moonquakes can also weaken structures built on the lunar surface.
When reality becomes stranger than satire, maybe the satirists can teach us something. Or, maybe the last laugh will be on them after all. The Onion said on Thursday that its parent company bought Infowars, the disgraced purveyor of Sandy Hook misinformation and vendor of pseudoscience supplements. The Onion posted on Bluesky that it plans to transform the rebooted Infowars into “a very funny, very stupid website.” However, the Texas judge overseeing the bankruptcy sale temporarily halted the takeover, citing concerns about the auction process. A review hearing will be scheduled for next week.
The Onion said it received the blessing of the families of the victims of the Sandy Hook Elementary School shooting to scoop up Infowars in a bankruptcy auction. Everytown for Gun Safety, a nonprofit founded in the massacre’s aftermath, reportedly plans to advertise on the rebooted site if the sale is finalized.
Infowars founder Alex Jones was found liable in 2022 for nearly $1.5 billion in damages for spreading conspiracy theories about the 2012 shooting that killed 20 children and six adult staffers.
After The Onion’s triumphant announcement on Thursday, the AP reported that US Bankruptcy Judge Christopher Lopez called for an evidentiary hearing to review the auction that led to the takeover. Christopher Murray, the trustee overseeing the auction, reportedly said in court that The Onion’s parent company, Global Tetrahedron, didn’t offer the highest bid in cash. However, the sale price included a clause where some Sandy Hook families would forego their portion of the proceeds to pay Jones’ other creditors. Murray said Global Tetrahedron’s bid was the best despite having a lower (undisclosed) cash amount due to that agreement.
A year-end classic, Coca-Cola's Christmas commercial seems to have arrived with a bitter taste for a good portion of consumers in 2024. This year, the popular campaign video was made using generative AI.
According to the Independent, the 15-second recreation of the famous "Holidays are Coming" ad is the company's first use of generative AI in a TV Christmas campaign. See the result in the post below:
After its release, many users took to social media to express dissatisfaction with the use of generative AI. Some consumers criticized the beverage brand's clip with comments such as "trash," "ugly," and "lazy."
This is the future. A lot of people hated the ad, but it will only get better with time.
Well, these bitter people have no idea that the future has arrived and a new era is before our eyes.
I just wonder what these videos will be like in 5 or 10 years.
They will be as good or better than the ones we see today. Video is a harder media to deal with but we are seeing improvement.
Remember image generation about a year ago? Notice how much that improved.
Yes, the change was that significant. The cool thing is that the trend is to improve even more.
I also remember videos made by AI in the past. Hands and heads were disproportionate to the body and look how we are today.
Things will actually get even better.
The Will Smith eating spaghetti video from a while back was classic. It also shows how poor the technology was.
Give it another year and online ads generated by AI will be commonplace.
Yes it is true. I think this is already a classic.
Now the "job" of some people from now on is to try to figure out whether or not a video was made by AI
In a few years, it will not matter.
Do you try to figure out whether a calculation you see online was done by a computer or an individual?
Is Khal or someone from the Leo team counting up the votes on each post and changing the number or does the computer do it?
Javier Meza, Coca-Cola's marketing director for the EU, said the brand was adapting to "today's times" by incorporating AI.
The United States' National Aeronautics and Space Administration (NASA) announced it has successfully completed the integration of a central component, a coronagraph, into the Roman Instrument Carrier. The technology is designed to make direct observations of exoplanets, worlds orbiting other stars outside our solar system. The work is part of the construction of the Nancy Grace Roman telescope, NASA's next astrophysics mission, with launch planned for May 2027.
The telescope will use a complex set of masks and active mirrors to block starlight, making it easier to detect the light of planets beyond our own solar system. According to NASA, it will be possible to detect planets up to 100 million times fainter than the stars they orbit.
The coronagraph, an instrument designed to block light, is 1.7 meters wide, comparable to a grand piano. Scientists expect the instrument to be a powerful ally in directly detecting planets, making it possible to obtain direct images more efficiently than ground-based telescopes can.
"To go from where we are to where we want to be, we need the Roman Coronagraph," said Rob Zellem, project scientist for the Roman Space Telescope, in a statement. "We will apply these lessons learned to NASA's next generation of flagship missions, which will be explicitly designed to look for Earth-like planets."
The equipment was designed to operate at room temperature, with insulation playing an important role in keeping the instrument at the right temperature in the cold vacuum of space, providing an extra margin for blocking light that could obscure the observations.
NASA elevated the air leak to the highest level of risk, but Russia isn't convinced it's that serious.
For the past five years, air has been escaping through a Russian section of the International Space Station (ISS) at an increasing rate. NASA and its Russian counterpart, Roscosmos, are still in disagreement over the root cause of the leak, as well as the severity of the consequences.
The leak was first discovered in 2019 in the vestibule (named PrK) that connects a docking port to the Russian Zvezda module, which Roscosmos had launched to low Earth orbit in July 2000. Earlier this year, NASA elevated the leak to the highest level of risk as the rate of air escaping from the module had doubled from one pound of air per day to a little over two pounds.
“While the Russian team continues to search for and seal the leaks, it does not believe catastrophic disintegration of the PrK is realistic,” Bob Cabana, a former NASA astronaut who now chairs the ISS Advisory Committee, said during a meeting on Wednesday, SpaceNews reported. “NASA has expressed concerns about the structural integrity of the PrK and the possibility of a catastrophic failure.”
“The Russians believe that continued operations are safe but they can’t prove to our satisfaction that they are, and the U.S. believes that it’s not safe but we can’t prove to the Russians’ satisfaction that that’s the case,” he added.
Russian teams believe the air leak was likely caused by high cyclic fatigue from micro vibrations, while teams at NASA think pressure and mechanical stress, residual stress, material properties of the module, and environmental exposure are all at play, according to SpaceNews.
If they don't handle this fast enough, space issues are deadly ones. I pray for space missions always, because one mistake and the whole Earth can suffer the consequences.
For many console owners, one rule is non-negotiable: no food near the hardware. But apparently that rule isn't universal. Living proof: Pizza Hut has unveiled the idea of a gadget that can be produced on a 3D printer and is designed to keep pizza warm by using the PS5 as an "oven."
The item, named PIZZAWRMR (a play on "warmer"), was designed to sit on top of Sony's console and use the hot air the device exhausts to heat the food.
India's rover on the moon found sulfur and other important elements at the South Pole. This will help us learn more about the moon's history and future use.
https://img.inleo.io/DQmNpNoTLLvPVVqVavsE2CBTBx1dWwiyMo7NK39qfCmD6LL/resize.webp
The discoveries we are going to make over the next decade will blow our mind.
By 2040, what we are living in today will seem like the Stone Ages.
Hahahah, Stone Age, yes. If those from the '60s came to today, they'd be surprised how much the world has changed. But technology is accelerating, so in just 10 years we'll feel the same.
Tens of thousands of Netflix users reported issues accessing the streaming service prior to the long-awaited showdown between boxing legend Mike Tyson and YouTuber-turned-boxer Jake Paul.
Customers began reporting issues at around 7 p.m. CT, according to the website Down Detector, which tracks online service outages. Reports of problems skyrocketed at around 9:46 p.m., when roughly 97,000 reports had been received.
At 9:26 p.m., 69,000 users reported issues accessing the streaming service. NBC Chicago contacted Netflix for a response to the troubles and was told, in part, "Nothing to comment on at this time..."
Tyson retired with a 50-6 record and 44 knockouts after losing to Kevin McBride 19 years ago. Paul debuted as a pro boxer about four years ago and is 10-1 with seven knockouts fighting mostly mixed martial artists and journeymen boxers.
The fight was originally scheduled for July 20 but had to be postponed when Tyson was treated for a stomach ulcer after falling ill on a flight. In a documentary chronicling the preparations for the fight, Tyson said he lost 26 pounds in the process of recovering.
During a press conference Wednesday, before the fight, Tyson had terse answers for all the questions he was asked about the fight with Paul.
Yes, yes, yes, this was a big deal. I wonder how this could have ever happened. I thought Netflix's live streaming servers were strong enough to handle those numbers.
No idea. I don't have Netflix, but these are the growing pains for some of these platforms.
On the topic of Netflix, I think soon we should hope to see a Web3 Netflix, right?
We need to work on creating it. These things do not build themselves.
Very true. Maybe after a while being part of this community I'll try and see if this could be my mission for the platform. But I would need experience first, and as you say, it doesn't build itself, so I'll have to do it.
Yep. We just need to keep focus on doing all we can in a decentralized manner. The fact that you are adding information and data here is a good first step. It is what allows us to democratize data.
We are feeding the database to train LeoAI. Once that is released, then we can start to use it, a non Big Tech chatbot.
Ooo, I like the sound of a non-Big Tech chatbot. Finally something not controlled or regulated by governments and individual billionaires.
I just can't believe they haven't commented on that yet; or perhaps they themselves have to find out what exactly caused such a problem.
U.S. aircraft developer Electra has signed a contract with NASA to explore aircraft concepts that could significantly scale up its technology by mid-century. The agreement was reached under NASA's Advanced Aircraft Concepts for Environmental Sustainability (ACCES) 2050 initiative.
Electra has been testing a two-seat demonstrator, the EL-2 Goldfinch, which completed its first ultra-short takeoff and landing flight in May; the company says the aircraft can take off in as little as 46 meters, potentially enabling operations in hard-to-reach locations.
For the NASA project, Electra is collaborating with major partners including American Airlines, Honeywell Aerospace Technologies, Lockheed Martin Skunk Works, the Massachusetts Institute of Technology (MIT), and the University of Michigan. Together, they are exploring broader applications of the low-emission technology.
Electra CEO Marc Allen told FlightGlobal that the company is intensely focused on developing a nine-seat aircraft with ultra-short takeoff and landing capability. However, he believes Electra's technology can be adapted for commercial airline applications.
Allen mentioned that the company is working to understand how changes in battery energy density could affect distributed hybrid-electric propulsion systems on a 200-seat aircraft.
In September, the company completed a series of demonstration flights at military installations in Virginia as part of the U.S. Air Force's AFWERX Agility Prime program.
Allen emphasized the importance of a viable business model and a regulatory framework that support technological development. He noted that in the nine-seat category, emissions, noise, cost, and access are fundamental to creating an effective business model, allowing the company to penetrate and potentially expand existing markets.
However, he said that while the future may bring new concepts, the current emphasis remains on the revolutionary nine-seat blown-lift design, deferring decisions on possible capacity extensions until market needs become clearer.
There's a new AI that can find diseases in tissue images much faster than doctors can.
It helps doctors spot cancer and other illnesses quickly, saving time and improving accuracy. These are the benefits of AI and that's why we embrace it
AI and healthcare presents huge opportunities. Not only are we looking at the enhancement of care, hopefully saving lives, but also being able to reduce the cost of things a great deal.
Now that's an aspect I wasn't looking into; yes, the cost of healthcare will also drop significantly. In lots of situations the problem isn't the cure but being able to afford it, so I'm looking forward to AI in healthcare.
This from VeniceAI:
An AI factory is a hypothetical organization that would design, develop, and deploy artificial intelligence (AI) systems at scale, similar to how a traditional manufacturing factory produces physical goods. Here's a more detailed breakdown of the concept:
Key Components:
Modular Design:
AI factories would break down complex AI systems into smaller, modular components, making it easier to develop, test, and deploy individual parts. This modular design would allow for greater flexibility, scalability, and reuse of AI components.
Standardization:
AI factories would standardize AI systems and frameworks, reducing the need for customization and increasing the speed of development. Standardization would enable easier integration of AI components, reduced costs, and improved efficiency.
Scalability:
AI factories would be designed to handle large volumes of data and scale up or down depending on the needs of the organization. This scalability would enable AI factories to adapt to changing business requirements and emerging trends.
Collaboration:
AI factories would foster collaboration between data scientists, software engineers, and domain experts to ensure that AI systems meet the needs of the organization. Collaboration would lead to better-informed AI development, improved system performance, and increased adoption across industries.
Continuous Learning:
AI factories would prioritize continuous learning and improvement, using data and feedback to refine and update AI systems over time. Continuous learning would enable AI factories to stay up-to-date with emerging technologies, improve system performance, and address changing business requirements.
Industry-Specific Focus:
AI factories would focus on specific industries or use cases, such as healthcare, finance, or transportation, to develop AI systems tailored to those domains. Industry-specific focus would enable AI factories to address unique challenges and opportunities, leading to more effective and efficient AI solutions.
AI Factory Business Model:
AI factories could operate on a variety of business models, including:
Challenges and Opportunities:
The development of AI factories poses several challenges, including:
Despite these challenges, the development of AI factories presents numerous opportunities, including:
In summary, an AI factory is a hypothetical organization that would design, develop, and deploy artificial intelligence systems at scale, similar to how a traditional manufacturing factory produces physical goods. AI factories would prioritize modularity, standardization, scalability, collaboration, and continuous learning to develop and deploy effective AI systems.
I finally understand what it is. I think by the time we start seeing AI factories dominate, traditional jobs will be gone because AI will have taken over.
NFTs became a punchline. But some people are taking another look.
McDonald’s will soon feature cartoon animation-style characters on its McCafe coffee cups for a limited time. That’s not particularly odd. But in this case, the animated characters are from Doodles, the web3 project. And it’s just the latest sign that some people are trying to bring back the NFT fad after a very brief but loud explosion in the tech scene a few years ago.
Remember non-fungible tokens? When NFTs went mainstream in the winter of 2021-22, everyone wanted to get in on the hype. Jimmy Fallon and Paris Hilton bragged about their Bored Apes on late-night TV, Budweiser launched Dwyane Wade NFTs to promote its non-alcoholic beer, and Starbucks launched a marketplace for NFT stamps.
But it didn’t last long. Fallon’s Bored Apes promotion in Jan. 2022 was the beginning of the end. By Sept. 2022, trading volume for NFTs had fallen 97%. Doodles, which started as an NFT project, was declaring by early 2023 that it was no longer just an NFT project. And now Doodles is partnering with McDonald’s for a coffee cup promotion and Doodle NFTs are soaring in price, according to CoinTelegraph.
The X account for McDonald’s shared a teaser on Tuesday which included a short 7-second video with the numbers 11/18, meaning Nov. 18.
It’s perhaps telling that Doodles ends its tweet thread with a warning about scammers.
“this is the last post in this thread from the official @doodles account beware of impersonator accounts & phishing links remember, DO NOT click on links that appear to be from Doodles before cross-referencing with http://doodles.app,” the tweet reads.
Elon Musk & Vivek Ramaswamy will lead a new government department to cut waste, lower spending and reduce unnecessary rules by 2026. We finally have tech billionaires working directly with government, just like the villain in Johnny English 3.
They can only recommend, so they wield no power.
In the end, I don't think much gets done.
To be honest, I didn't see anything great about the Coca-Cola commercial that was made by AI.
The people who complained and even said that it was "ugly and lazy" certainly need to understand that times have changed and the company is just adapting to the current moment with the use of AI.
https://inleo.io/threads/view/coyotelation/re-taskmaster4450le-kplrjier?referral=coyotelation
It isn't too impressive but it is a sign of the future.
The people complaining show how little they understand how things are going. In 6 months, the capabilities will be much greater. By next Christmas, we might find that most of the commercials on social media are AI generated.
Yes, definitely and I don't doubt it.
The problem is the ignorance that plagues these people.
Ugly is ugly. The fact that it is made with AI is beside the point. I'm sure at some point, probably soon, they'll be able to make beautiful AI commercials. But this isn't one.
Instead of increasing the physical data transfer rate beyond the 23 Gbps offered by Wi-Fi 7, the next-generation Wi-Fi 8—based on IEEE's 802.11bn Ultra High Reliability (UHR) specification—will focus on improving connection reliability and user experience.
Traditionally, new Wi-Fi iterations (as specified by IEEE 802.11 standards) have focused on maximizing data transfer rates by increasing channel bandwidth and number of channels and introducing new modulation methods. With Wi-Fi 7, the maximum PHY rate is 23 Gbps, though nobody expects to hit speeds that high. Also, the reliability of high-speed Wi-Fi connections leaves much to be desired. To that end, the next-generation Wi-Fi 8 iteration will not increase theoretical speed but will introduce new features designed to improve real-world performance and boost connection reliability, reports PC World, citing a MediaTek whitepaper.
On a high level, Wi-Fi 8 (802.11bn) resembles Wi-Fi 7 (802.11be): it uses the 2.4, 5, and 6 GHz bands, the same modulation (4096 QAM), eight spatial streams, MU-MIMO, OFDMA, and a maximum channel bandwidth of 320 MHz.
However, according to the MediaTek paper, the new spec introduces several key features designed to improve real-world performance and connection speeds: Coordinated Spatial Reuse (Co-SR), Coordinated Beamforming (Co-BF), Dynamic Sub-Channel Operation (DSO), and enhanced Modulation Coding Scheme (MCS). Remember that we are talking about the standard as MediaTek sees it. Some features could be mandatory, while others could end up being optional.
Scammers are becoming increasingly sophisticated, turning to artificial intelligence to better con their victims out of money. This includes using deepfakes to present themselves as someone else. Now, AI is being used in the fight back with telecom company O2 deploying an AI-powered granny in the battle.
Named Daisy, it is a new AI tool with the voice of a grandmother designed to talk with fraudsters and "waste as much of their time as possible". Basically, she rambles on about anything and everything to keep them away from real people.
According to O2, 67% of British people are worried about falling victim to fraud, and a quarter experience some degree of fraud every week. Daisy gives them a way to fight back, and it has kept scammers on the phone for up to 40 minutes at a time.
Daisy has taken scammers on "meandering stories of her family, talked at length about her passion for knitting and provided exasperated callers with false personal information including made-up bank details."
Daisy was built by the team at O2 using a custom-trained large language model with a 'character personality layer' to produce personalized responses.
It listens to the caller, transcribes the speech into text, sends that to the LLM, which generates a response, and sends it back to the caller using text-to-speech. This is similar to the way Google Gemini Live works, or the earlier version of ChatGPT Voice.
If you've ever had a conversation with Gemini Live or Meta AI Voice the experience will be fairly similar. It happens in real time with no noticeable delay.
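For anyone curious how such a pipeline hangs together, here is a rough Python sketch of the listen, transcribe, respond, speak loop described above. O2 has not published Daisy's code, so every class, method, and canned reply here is a placeholder for whichever speech-to-text, LLM, and text-to-speech services you would actually plug in.

```python
# A hypothetical sketch of a Daisy-style scam-baiting loop. All names and the
# canned reply are placeholders; real STT, LLM, and TTS calls would replace them.
from dataclasses import dataclass, field

@dataclass
class ScamBaiterBot:
    persona: str = "You are Daisy, a chatty grandmother. Ramble warmly and never give real data."
    history: list = field(default_factory=list)

    def transcribe(self, audio_chunk: bytes) -> str:
        # Placeholder STT: a real build would call a speech-to-text service here.
        return audio_chunk.decode("utf-8", errors="ignore")

    def generate_reply(self, caller_text: str) -> str:
        # Placeholder for the LLM call; a real build would send self.persona plus
        # self.history to a language model ("character personality layer") and
        # return its completion.
        self.history.append({"role": "user", "content": caller_text})
        return "Oh, that reminds me of my knitting circle... now where did I put those needles?"

    def speak(self, reply_text: str) -> bytes:
        # Placeholder TTS: a real build would synthesize audio in the grandmother voice.
        return reply_text.encode("utf-8")

    def handle_turn(self, audio_chunk: bytes) -> bytes:
        text = self.transcribe(audio_chunk)
        reply = self.generate_reply(text)
        self.history.append({"role": "assistant", "content": reply})
        return self.speak(reply)

bot = ScamBaiterBot()
print(bot.handle_turn(b"Hello, this is your bank calling about a problem with your account."))
```

A real deployment would also stream audio in both directions and pace the responses so the pauses feel human, but the loop itself stays this simple.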
Artificial intelligence is the next industrial revolution, Microsoft president Brad Smith said at Web Summit Lisbon 2024 on Tuesday (the 12th). During his talk, the executive described AI as the "next great GPT," that is, the next great advance in general-purpose technologies.
For Smith, artificial intelligence is an axis of economic and social change, comparable to other technologies that changed the course of society, such as electricity and steam.
"This fourth industrial revolution more closely resembles the second," the executive said, highlighting the impact of the distribution of electricity, which brought prosperity to countries. In his view, AI can follow the same path.
"We have much to learn from the lessons of the past, including the second industrial revolution. Because what you see, for example, is that where electricity went, prosperity followed," he commented.
"We should start by recognizing one thing, perhaps above everything else we will discuss today: AI is the next great GPT. It is the next great advance in general-purpose technologies."
Over the past year, YouTube has sought to shake up the music scene with Dream Track, a tool that uses artificial intelligence to create short songs. Initially tested in the United States, the feature has renowned partners such as Charli XCX, Demi Lovato, John Legend, and Sia, who lend their voices to the musical projects. The technology allows the creation of tracks of up to 30 seconds that can be used in YouTube Shorts videos.
Recently, YouTube added the ability to remix songs to the tool. This functionality, still being tested with a select group of creators, promises to diversify user interaction by allowing the musical style of an existing track to be edited. The innovation is in line with trends toward personalization and content adaptation in the digital era.
The new feature, called "Restyle a Track," gives creators the ability to change a song's genre or mood. The user simply describes the goal in a prompt, and the tool uses AI to transform the track while keeping the original vocal and lyrical essence. This capability for quick, accessible modification represents a significant advance in how music is produced and consumed online.
Part 1/4:
In the world of software security and bug hunting, there's a tool that has made the process remarkably accessible – the CH341A. This unassuming device, costing a mere $10 or so, has the power to extract the firmware from a wide range of electronic devices, opening up a world of possibilities for security researchers and curious minds alike.
The focus of this article is the Linksys E5400, a common and affordable Wi-Fi router used in many households. This router, like many others, routes all internet traffic through the device, making it a potential target for malicious actors. By using the CH341A, we can delve into the inner workings of the router and uncover any vulnerabilities that may be lurking within.
[...]
Part 2/4:
The process begins by identifying the key components of the router – the main CPU and the SPI (Serial Peripheral Interface) flash chip. The SPI flash chip contains the firmware that runs on the CPU, and by extracting this firmware, we can analyze the code and identify potential security issues.
The CH341A allows us to read the SPI flash chip without the need for desoldering, a process that can be risky and potentially damage the device. By clipping the CH341A onto the exposed pins of the SPI flash chip, we can use a program like Flashrom to communicate with the chip and extract the firmware.
Once the firmware is extracted, we can use tools like Binwalk to dissect the file, uncovering hidden gems such as the U-Boot bootloader and the root file system. This gives us access to the actual code running on the router, enabling us to dive deep into the software and search for vulnerabilities.
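As a rough illustration of that workflow, the sketch below shells out to flashrom (which supports the CH341A via its ch341a_spi programmer) and then to binwalk for extraction. The file name is made up, and it assumes both tools are installed and the clip is already attached to the chip.

```python
# A hypothetical helper for the dump-and-extract workflow described above.
import subprocess

def dump_firmware(out_path: str = "e5400_dump.bin") -> str:
    # flashrom's ch341a_spi programmer reads the SPI flash through the USB adapter.
    subprocess.run(["flashrom", "-p", "ch341a_spi", "-r", out_path], check=True)
    return out_path

def extract(image: str) -> None:
    # binwalk -e scans for known signatures (bootloader, squashfs, etc.) and
    # extracts what it finds into a _<image>.extracted directory.
    subprocess.run(["binwalk", "-e", image], check=True)

if __name__ == "__main__":
    extract(dump_firmware())
```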
[...]
Part 3/4:
The ethical implications of this process are an important consideration. While it is generally legal in the United States to reverse-engineer devices for the purpose of security research, it's crucial to tread carefully and avoid any actions that could be considered trade secret violations or malicious exploitation. The responsible approach is to report any discovered vulnerabilities to the vendor, potentially earning a bug bounty, rather than publicly disclosing them.
For those new to the world of bug hunting, the CH341A provides an excellent starting point. By targeting devices with known vulnerabilities, you can gain hands-on experience in the process of firmware extraction and analysis, without the risk of discovering uncharted territory.
[...]
Part 4/4:
In conclusion, the CH341A is a powerful tool that has democratized the world of embedded device security. By empowering individuals to explore the inner workings of the electronics they own, it has opened up new avenues for security research, bug hunting, and a deeper understanding of the technology that surrounds us. As we continue to rely on these devices in our daily lives, the importance of understanding their security posture only grows, and tools like the CH341A are at the forefront of this crucial endeavor.
Part 1/5:
In a recent blog post about the Windows Endpoint Security Ecosystem Summit, Microsoft revealed its commitment to providing more security capabilities to solution providers outside of Kernel mode. This move has significant implications for the future of gaming on both Windows and Linux platforms.
The catalyst for this change was the massive outage caused by a faulty update from the security company Crowdstrike. When Crowdstrike's Falcon software, which runs in Kernel mode, shipped a problematic update, it brought the global economy to a standstill for nearly eight hours. This incident has prompted Microsoft to rethink its approach to security in the Windows ecosystem.
[...]
Part 2/5:
The debate around anti-cheat solutions in the gaming industry has been a contentious one. Developers have primarily relied on two options: server-side anti-cheat and client-side anti-cheat. Client-side anti-cheat, which requires privileged access to the user's computer, has become increasingly controversial due to the security risks associated with Kernel-level code.
As a Kernel-level developer with over a decade of experience, one Reddit user eloquently explained the dangers of Kernel-level code. They noted that Kernel-level code has "free access to the internal data structures of the kernel" and can "do basically whatever [it] wants to [the] system." This level of control poses a significant security risk, as a vulnerability in Kernel-level anti-cheat software could have catastrophic consequences.
[...]
Part 3/5:
Microsoft's plan to move security features out of the Kernel has several implications for the future of gaming. Firstly, it would make anti-cheat software much less intrusive, as it would have to be implemented with user-level access. This, in turn, would make it easier to emulate with translation layers like Wine or Valve's Proton, potentially paving the way for improved Linux gaming support.
Additionally, Microsoft is exploring the idea of providing standardized security sensors within the Windows Kernel, which game developers could leverage instead of creating their own custom Kernel-level anti-cheat solutions. This approach would help reduce the overall amount of Kernel-level code and the associated security risks.
[...]
Part 4/5:
While this move by Microsoft may address some of the concerns around Kernel-level anti-cheat, there are still potential challenges to overcome. For example, if Microsoft's security sensors do not meet the needs of game developers, they may still seek alternative solutions that could potentially undermine the security benefits of this new approach.
Nonetheless, the trend towards moving security features out of the Kernel is a positive step for the gaming industry. It not only addresses the security risks associated with Kernel-level code but also has the potential to improve the viability of gaming on Linux platforms, which have traditionally been viewed as less secure or less suitable for gaming.
[...]
Part 5/5:
As the gaming industry continues to evolve, it will be crucial for developers and platform providers to prioritize security and user trust. Microsoft's plan to eliminate Kernel-level anti-cheat solutions is a significant step in that direction, and it will be interesting to see how the industry responds and adapts to this changing landscape.
Part 1/4:
In the world of gaming consoles, the PlayStation 4 (PS4) stands as a beloved and well-known device. However, a recent discovery has shaken the gaming community – a new jailbreak has been found that takes advantage of a vulnerability that has been publicly known since 2006.
This vulnerability, discovered in the PS4's Point-to-Point Protocol over Ethernet (PPPoE) implementation, allows for a denial of service or potentially remote code execution in the kernel context. The exploit, dubbed "PPPwn," is a testament to the importance of understanding the software bill of materials – the comprehensive list of all the code components that make up a software system.
[...]
Part 2/4:
The vulnerability lies in a heap buffer overwrite: a destination buffer called "buff" is allocated with malloc, and a source pointer "p" is derived from "h + 1", where "h" is the packet header. Because the length value taken from "p[1]" is never properly checked, an attacker can copy an arbitrary amount of data from the network into the heap.
This seemingly simple bug has far-reaching consequences. By controlling the size of the malloc allocation and the data that is copied into the heap, the attackers can influence which bin the heap allocator uses, allowing them to trigger a copy from a larger mbuf into a smaller "buff," overwriting adjacent allocations.
[...]
Part 3/4:
With this primitive in hand, the attackers can then bypass Kernel Address Space Layout Randomization (KASLR) by leaking the address of the PPPoE "softc" list object, which reveals the base address of the kernel image. From there, they construct a series of ROP gadgets – pre-existing code snippets within the program – to make kernel memory globally writable, allowing them to execute their own code.
The exploit's stage one involves cleaning up the corrupted linked list elements, followed by a stage two that binds a TCP server to a specific port, allowing the attackers to inject their payload. The final result is a jailbroken PS4, where the user can now run their own code and applications, effectively taking control of the device.
[...]
Part 4/4:
This case study highlights the importance of understanding the software bill of materials. Even the most secure software can harbor underlying vulnerabilities if the developers are unaware of the third-party code and libraries they are relying on. By keeping a close eye on the components that make up their software, developers can proactively address potential issues and ensure the overall security of their systems.
The PS4 jailbreak is a testament to the ingenuity and persistence of the security research community. While Sony has undoubtedly invested significant resources into securing their gaming console, the discovery of this 2006 vulnerability serves as a reminder that even the most well-protected systems can be vulnerable to exploitation. As technology continues to evolve, the importance of maintaining a comprehensive understanding of the software bill of materials will only grow, ensuring that developers can stay one step ahead of potential threats.
Part 1/4:
In the ever-evolving world of cybersecurity, another vulnerability has been discovered in a widely used piece of software - this time, it's affecting the popular web browser, Firefox, and its counterpart, the Tor browser.
The vulnerability in question is a use-after-free vulnerability, which can be a tricky concept to grasp. In simple terms, a use-after-free vulnerability occurs when a program continues to use a memory location after it has been freed, or deallocated. This can lead to unexpected behavior and, in the worst-case scenario, allow an attacker to execute malicious code on the affected system.
[...]
Part 2/4:
To illustrate this concept, the video presents a simple C code example. The code defines two structures, "cat" and "dog", each with an ID and a function pointer. Two global pointers, "Randy" and "Frank", are used to reference these structures. The video then demonstrates how, by freeing the "Frank" pointer and then creating a new "cat" object, the program can end up with a type confusion, where the "Frank" pointer now points to the memory of the "Randy" object. This allows an attacker to potentially control the function pointer and execute arbitrary code.
The vulnerability discovered by the security research company ESET specifically targets the way Firefox handles animation timelines in CSS. By crafting malicious CSS, an attacker can exploit this use-after-free vulnerability and gain remote code execution on the affected system.
[...]
Part 3/4:
The impact of this vulnerability extends beyond just Firefox, as it also affects the Tor browser, which is built on the Firefox codebase. Tor is a popular tool used to access the dark web, and any vulnerability in its underlying browser can have serious consequences for users who rely on it for privacy and security.
The video touches on the potential of Rust, a programming language designed with memory safety in mind, to address vulnerabilities like the one affecting Firefox. The Rust borrow checker, a key feature of the language, is specifically designed to prevent use-after-free vulnerabilities. This raises an interesting point about the technical debt and inertia associated with legacy codebases like the one in Firefox, which has been in development since the early 1990s.
[...]
Part 4/4:
The video suggests that while Rust offers a promising solution, the transition to memory-safe languages will not happen overnight. It highlights the need for either a gradual acknowledgment of the time required to address these issues or a more radical approach of starting from the ground up with secure coding practices.
The discovery of this zero-day vulnerability in Firefox and Tor serves as a reminder of the ongoing battle against cybersecurity threats. As software becomes increasingly complex, the need for robust security measures and a deeper understanding of memory management vulnerabilities becomes ever more crucial. The video provides a valuable insight into the technical details of use-after-free vulnerabilities and the potential role of Rust in shaping a more secure future for web browsing.
A Google-made artificial intelligence program verbally abused a student seeking help with their homework, ultimately telling her to “Please die.”
The shocking response from Google’s Gemini chatbot large language model (LLM) terrified 29-year-old Sumedha Reddy of Michigan — as it called her a “stain on the universe.”
“I wanted to throw all of my devices out the window. I hadn’t felt panic like that in a long time to be honest,” she told CBS News.
The doomsday-esque response came during a conversation over an assignment on how to solve challenges that face adults as they age.
The program’s chilling responses seemingly ripped a page — or three — from the cyberbully handbook.
“This is for you, human. You and only you. You are not special, you are not important, and you are not needed,” it spewed.
“You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Please die. Please.”
Billionaire entrepreneur Elon Musk expanded his lawsuit against ChatGPT maker OpenAI, adding federal antitrust and other claims and adding OpenAI’s largest financial backer Microsoft as a defendant.
Musk’s amended lawsuit, filed on Thursday night in federal court in Oakland, Calif., said Microsoft and OpenAI illegally sought to monopolize the market for generative artificial intelligence and sideline competitors.
Like Musk’s original August complaint, it accused OpenAI and its chief executive, Sam Altman, of violating contract provisions by putting profits ahead of the public good in the push to advance AI.
“Never before has a corporation gone from tax-exempt charity to a $157 billion for-profit, market-paralyzing gorgon — and in just eight years,” the complaint said. It seeks to void OpenAI’s license with Microsoft and force them to divest “ill-gotten” gains.
OpenAI in a statement said the latest lawsuit “is even more baseless and overreaching than the previous ones.” Microsoft declined to comment.
The Taiwanese chipmaker will reportedly receive at least $1 billion of the total award this year
The Biden administration has finalized Taiwan Semiconductor Manufacturing Company’s (TSM) Chips Act incentive funding to support its U.S. fabrication sites.
The Taiwanese chipmaker signed a preliminary agreement in April for $6.6 billion in grants from the federal Chips and Science Act. The finalized award comes after the U.S. Commerce Department completed its due diligence process. TSMC is also eligible for up to $5 billion in proposed loans.
TSMC will receive at least $1 billion of the total award this year due to already meeting some of the required benchmarks, Bloomberg reported, citing Biden administration officials. The award will be disbursed as the company’s fabs in Phoenix, Arizona complete certain project milestones.
The federal funding will go toward TSMC’s $65 billion private investment to build three state-of-the-art chipmaking facilities in Phoenix. TSMC already has two planned facilities in Phoenix that are expected to begin production in 2025 and 2028. Some of the funding will support building the third facility.
“This is the largest foreign direct investment in a greenfield project in the history of the United States,” U.S. President Joe Biden said in a statement. “The first of TSMC’s three facilities is on track to fully open early next year, which means that for the first time in decades an American manufacturing plant will be producing the leading-edge chips used in our most advanced technologies – from our smartphones, to autonomous vehicles, to the data centers powering artificial intelligence.”
A Disney executive said the "operational complexity" of selling its TV networks would offset the benefits
Disney (DIS) has no plans to sell its traditional broadcast and cable networks, setting itself apart from competitors who are actively considering shedding their linear TV assets.
Disney CFO Hugh Johnston told CNBC on Thursday that the company has looked at the math and believes a sale of its linear TV assets would be too operationally complex to justify the expected benefits.
“You can do spreadsheet math to justify just about anything, but when you look at operationally what it takes to do that, we came to the conclusion pretty quickly: This is actually a good integrated portfolio, and the cost of doing it is probably more than the benefit,” Johnston said.
Is it a matter of fear of moving to a different strategy, or do they have a point? Anyway, they're a multi-billion-dollar company and I'm not 🤣
Scientists at CERN ran the Large Hadron Collider to detect top quarks, the heaviest known fundamental particles in the universe, and it worked.
In 1995, a team of scientists at the Fermi National Accelerator Laboratory (Fermilab) discovered the heaviest fundamental particle known to mankind, the top quark. What was intriguing about top quarks was that despite their large mass, they decay almost instantly, making them hard to study directly.
The ATLAS collaboration, a group of researchers working with the ATLAS detector at the Large Hadron Collider (LHC), has detected top quarks in a collision between lead ions. Generally, top quarks are studied in proton-proton collisions, but this is the first time they have been observed in a heavy-ion collision.
“This observation represents a significant step forward in heavy-ion collision physics, paving the way for new measurements of the quark-gluon plasma (QGP) that is created in these collisions and delivering fresh insights into the nature of the strong force that binds protons, neutrons, and other composite particles together,” the ATLAS collaboration noted.
Probing QGP and top quark detection
It is believed that large amounts of quark-gluon plasma (QGP) were formed soon after the Big Bang, and this plasma provided the conditions necessary for the formation of protons, neutrons, and various other fundamental particles.
Understanding QGP in depth can help scientists understand the factors that contributed to the origin of all matter as we know it. However, when QGP is observed in heavy-ion collisions, it has a lifetime of only about 10⁻²³ seconds — making it nearly impossible to study directly.
AI developer ElevenLabs partnered with the Garcia Estate to recreate the rocker’s voice for its ElevenReading voice app.
Jerry Garcia may be gone, but he’s not forgotten: A new app will now read your email or any other text in his voice.
In a deal with the Garcia Estate, AI developer ElevenLabs recently added the legendary Grateful Dead frontman to its “Iconic Voice Project” lineup.
#jerrygarcia #gratefuldead #music #ai #technology #elevenlabs
Part e-reader, part audiobook streaming platform, the ElevenReader mobile app was designed as a way for people to listen to books, articles, PDFs, and even emails in the voice of a historical figure.
Other notable voices in the app include “Rebel Without a Cause” star James Dean, “The Wizard of Oz” star Judy Garland, “True Grit” star John Wayne, and “Smokey and the Bandit” star Burt Reynolds.
Deadheads were impressed with the likeness of Jerry’s voice double. “I’ve heard a snippet of AI Jerry, and it does sound like him,” author and historian Dennis McNally told Decrypt.
“I can’t imagine what he’d say about it,” he added. “This is the 21st century, and he didn’t live to see it, darn it.” Garcia died in 1995 at the age of 53.
Now with this technology, imagine being able to bring back the voice of Michael Jackson, have ChatGPT write killer lyrics, and let AI music tools make the song 😱 mind-blowing
The project uses a new 3D printing method called dual-wavelength, two-photon polymerization (DW-2PP). This technique uses two light sources to print different materials with high precision.
Lawrence Livermore National Laboratory (LLNL) at the National Ignition Facility (NIF) is exploring the use of advanced 3D printing to mass-produce fuel capsules for fusion energy power plants.
It could be a significant advance in the field of fusion energy, which is considered the “Holy Grail” of clean and abundant power.
While the ignition experiment, which the Lab achieved in 2022, was a major breakthrough, producing fusion energy on a commercial scale presents significant challenges.
One of the biggest hurdles is the production of fuel capsules required for the process. These capsules hold the deuterium and tritium fuel used in fusion reactions.
These capsules, which must be nearly perfectly spherical, currently take months to manufacture. For reference, a viable power plant would require nearly a million of these capsules per day. Moreover, these capsules must be manufactured with extreme accuracy.
“The need for perfection is such that, if a NIF capsule were enlarged to the size of the Earth, an imperfection higher than the Hollywood sign in Los Angeles would be disqualifying,” highlighted the Lab in a press release.
Russell Okung acquired 240 BTC for half his $13 million salary in 2020. Four years later, the value of his investment is over $21 million.
In 2020, former Carolina Panthers offensive tackle Russell Okung made headlines by negotiating to have half of his $13 million NFL contract for the 2020 season paid out in Bitcoin.
Today, that move has paid off, with the value of Okung’s BTC now estimated at $21 million, thanks to the recent surge in the asset’s prices.
$21 Million Payout
While he was not paid in cryptocurrency, the Panthers converted $6.5 million of his salary through Strike, a financial service from Zap that facilitated the purchase of Bitcoin with his funds. This move made Okung one of the first high-profile professional athletes to receive a substantial portion of his income in Bitcoin.
The digital asset was trading at around $27,000 at the time of his investment. With his $6.5 million, Okung acquired approximately 240 BTC, a move viewed as a significant financial risk since the cryptocurrency was still in a growth phase and was known for its volatile price swings.
In late 2022, the cryptocurrency experienced one of its more severe downturns, dropping to around $17,000. This means the 36-year-old’s investment temporarily lost significant value, decreasing to around $4.08 million. Nevertheless, Okung opted to hold onto his stash.
Following the U.S. presidential election in November 2024, BTC prices surged dramatically, reaching an all-time high of over $83,000. As of this week, bitcoin is around $88,000, raising the value of Okung’s initial 240 BTC to approximately $21.36 million.
This increase effectively boosts the total worth of his Panthers contract to around $27.8 million, meaning Okung’s decision to invest half his salary in Bitcoin has led to an additional $14.8 million in value.
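The rough arithmetic behind those figures looks like this; the exact totals depend on the BTC price used at each point, so small differences from the article's numbers are expected.

```python
# Back-of-the-envelope check of the Okung figures quoted above.
salary_half = 6_500_000            # USD converted to BTC in 2020
price_2020 = 27_000                # approximate BTC price at the time
btc_held = salary_half / price_2020
print(f"BTC acquired: ~{btc_held:.0f}")                  # roughly 240 BTC

price_now = 88_000                 # approximate BTC price this week
value_now = 240 * price_now
print(f"Current value of 240 BTC: ~${value_now:,.0f}")   # roughly $21 million

gain = value_now - salary_half
print(f"Added value vs. cash salary: ~${gain:,.0f}")     # roughly $14-15 million
print(f"Effective contract value: ~${13_000_000 + gain:,.0f}")
```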
Bitcoin's price is near $100,000, and this happened after Trump's win. What I'm seeing is the Federal Reserve softening its crypto stance.
We expect big changes for Bitcoin and crypto regulations.
Bitcoin's recent spike to a new all-time high price has lifted the entire market—and mobile crypto apps on iOS and Android are surging too.
Mobile cryptocurrency apps have jumped towards the top of App Store rankings amid a surge for Bitcoin and other crypto prices, resulting in a major increase in the overall market cap for all coins—surpassing $3 trillion earlier this week.
Coinbase leads the charge, jumping from #26 on Election Day to #1 on Friday in the Free Finance category of Apple’s App Store for iOS devices. The ranking boost aligns with significant spikes in trading volume on the exchange, which surpassed more than $12 billion on Nov. 12, marking the highest volume day recorded this year according to data from CoinGecko.
Other major crypto apps like Robinhood and Crypto.com followed in Coinbase’s footsteps, making leaps inside the top 10 in the same category.
Historically, rising crypto prices correlate with an increased popularity of major exchanges and their respective apps, which are more accessible to mainstream retail buyers than decentralized platforms targeted towards crypto experts.
But this time, it’s not just the top crypto brands making their presence known in the App Store rankings. Moonshot, a mobile meme coin trading platform, has jumped 388 spots—from outside the top 400 apps—to #84 in the Free Finance category during the same timeframe.
With the way crypto is rising in value, people are starting to feel lucky enough to try these apps and at least hold whatever they earn until the next rise. I'm looking into more crypto apps to earn from small activities, but nothing beats Leo.
ByteDance, TikTok's owner, values itself at $300 billion today. We know about its U.S. challenges, but Trump says he'll, quote, "save TikTok."
It's going to be a hell of a task with Congress but he might do it
Yeah. The Chinese connection is bogus. TikTok has servers in the US.
It is an anti freedom of speech thing. That is what I believe is at the root of it all. TikTok will not give the data to the US government and obey what they say.
oh I see why there's a problem with it now, they won't comply with data transparency
The Pentagon has invested hundreds of millions into AI. The number is expected to grow in 2025 as it aims to reshape its defense strategy.
War is more profitable than peace, and AI developers are eager to capitalize by offering the U.S. Department of Defense various generative AI tools for the battlefields of the future.
The latest evidence of this trend came last week when Claude AI developer Anthropic announced that it was partnering with military contractor Palantir and Amazon Web Services (AWS) to provide U.S. intelligence and the Pentagon access to Claude 3 and 3.5.
Anthropic said Claude will give U.S. defense and intelligence agencies powerful tools for rapid data processing and analysis, allowing the military to perform faster operations.
Experts say these partnerships allow the Department of Defense to quickly adopt advanced AI technologies without needing to develop them internally.
"As with many other technologies, the commercial marketplace always moves faster and integrates more rapidly than the government can," retired U.S. Navy Rear Admiral Chris Becker told Decrypt in an interview. "If you look at how SpaceX went from an idea to implementing a launch and recovery of a booster at sea, the government might still be considering initial design reviews in that same period."
You know something bro, the moment I read the statement "War is more profitable than peace" I came to a conclusion: war is not going to end, it will evolve into something else 💔 it breaks my heart
Part 1/5:
In the world of offensive security, a new and alarming bug has emerged, affecting the Apple M1, M2, and M3 chipsets. This bug, known as "Go Fetch," is a vulnerability in the silicon design of the CPU, making it virtually unpatchable without physically replacing the affected hardware.
The bug is a local vulnerability, meaning an attacker must already have access to the target computer to exploit it. However, the implications of this bug are far-reaching, as it delves into the realm of side-channel cache-based memory attacks, a field that has been an active area of research and exploitation for the past decade.
[...]
Part 2/5:
At the heart of this vulnerability lies the concept of side-channel attacks. These attacks exploit the implementation details of a system, rather than the intended functionality. A classic example is a password-checking algorithm that takes longer to process an incorrect password than a correct one. This timing difference can be used to deduce the correct password, even though the algorithm itself is functioning as intended.
In the context of computers, this side-channel attack principle applies to the way they utilize cache memory. Cache is a high-speed memory layer that sits between the CPU and the slower main memory (RAM). When a process accesses a memory address, it may first find the data in the cache, resulting in a "cache hit," or it may need to fetch the data from main memory, resulting in a "cache miss." The time it takes to access the data can reveal information about the memory access patterns of other processes, leading to potential security vulnerabilities.
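A tiny, self-contained way to see a timing side channel in action (my own example, not from the Go Fetch research) is the classic early-exit string comparison: a naive check returns as soon as a character mismatches, so its running time leaks how much of a guess was correct, while a constant-time comparison does not.

```python
# Illustration of a timing side channel on a naive string comparison.
import hmac
import time

SECRET = "hunter2hunter2"

def naive_check(guess: str) -> bool:
    # Returns as soon as a character mismatches, so timing depends on the guess.
    if len(guess) != len(SECRET):
        return False
    for a, b in zip(guess, SECRET):
        if a != b:
            return False
    return True

def timed(fn, guess, reps=200_000):
    start = time.perf_counter()
    for _ in range(reps):
        fn(guess)
    return time.perf_counter() - start

wrong_early = "x" * len(SECRET)          # mismatches at the first character
wrong_late = SECRET[:-1] + "x"           # mismatches only at the last character
print("early mismatch:", timed(naive_check, wrong_early))
print("late  mismatch:", timed(naive_check, wrong_late))   # noticeably slower

# hmac.compare_digest compares in constant time, closing this particular channel.
print("constant-time :", hmac.compare_digest(wrong_late, SECRET))
```

Cache-based attacks like Go Fetch apply the same principle, except the "clock" is the difference between a cache hit and a cache miss rather than a loop that exits early.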
[...]
Part 3/5:
The Go Fetch vulnerability exploits a specific feature of the Apple silicon's data memory dependent prefetchers (DMPs). These prefetchers are designed to anticipate future memory accesses and preload data into the cache, improving performance. However, the researchers found that the DMPs do not properly validate whether a memory address is a valid pointer before attempting to prefetch it.
This flaw allows an attacker process to inject arbitrary memory addresses into the DMP, causing it to fetch and load those addresses into the cache. By carefully timing the cache access patterns, the attacker can then infer information about the memory contents of other processes, including sensitive cryptographic keys used for authentication and encryption.
[...]
Part 4/5:
The implications of the Go Fetch vulnerability are significant. An attacker with access to the target system can potentially read the RSA or AES keys used by other processes, compromising the security of cryptographic operations. This attack can be particularly devastating in scenarios where sensitive data is being processed, such as in secure authentication or financial transactions.
The researchers behind the Go Fetch vulnerability have released a white paper, a proof-of-concept, and detailed information about the bug, shedding light on the intricate workings of CPU architecture and the ongoing battle between hardware designers and security researchers.
[...]
Part 5/5:
The discovery of the Go Fetch vulnerability highlights the ever-evolving landscape of computer security. As hardware and software become increasingly complex, new attack vectors emerge, challenging the ability to create truly secure systems. This bug serves as a reminder that even the most fundamental components of our computing devices, such as the CPU, can harbor vulnerabilities that may be difficult, if not impossible, to patch.
The research behind the Go Fetch vulnerability showcases the depth of knowledge and dedication of security researchers in the field of offensive security. By understanding the inner workings of computer systems, they uncover vulnerabilities that can have far-reaching consequences, ultimately driving the development of more secure hardware and software designs.
#technology #ai #newsonleo !summarize
This video has already been summarized: https://inleo.io/threads/view/taskmaster4450le/re-taskmaster4450le-2albm6jfq
People are saying that Bitcoin entered a parabolic phase. What exactly does that term mean you might ask. BTC entering a parabolic phase basically means its price could skyrocket for weeks.
An analyst, Rekt Capital, claims that we’re only getting started with Bitcoin's rise in value. I shared a couple of threads about Bitcoin's rise and the response I got from those threads made me realize just how bullish everyone is.
#technology #ai #newsonleo !summarize
Part 1/4:
Large language models (LLMs) have made remarkable progress in recent years, excelling at tasks that align with their training data. However, they often struggle with novel problems requiring complex reasoning, planning, or string manipulation that differ significantly from their pre-training data.
Researchers have explored various techniques to improve LLM performance on such complex and novel tasks. One promising approach is called "test time training," which involves temporarily updating the model's parameters during inference based on the test input. This method differs from standard fine-tuning, as it operates in an extremely low-data regime, allowing for efficient customization of pre-trained neural networks.
[...]
Part 2/4:
The researchers identified three crucial components for successful test time training (a rough code sketch follows the list):
Initial Fine-Tuning on Similar Tasks: The model must be capable of performing well on related tasks before the test time training can be effective.
Auxiliary Task Format and Augmentations: The researchers generate diverse training data by applying geometric transformations to the test input, creating variations that the model can learn from during the test time fine-tuning process.
Per-Instance Training: The model updates its parameters for each test input, effectively creating a specialized prediction model for each instance.
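To make those three components concrete, here is a rough, hypothetical Python/PyTorch sketch of a per-instance test-time training loop. It is not the researchers' code: the toy model, the permutation-based augmentation, and all hyperparameters are stand-ins for the real LLM, the geometric transformations, and the ARC task format.

```python
# Hypothetical sketch of per-instance test-time training (TTT).
import copy
import torch
import torch.nn as nn

class ToyModel(nn.Module):
    """Stand-in for a pretrained model: maps a 16-dim vector to a 16-dim vector."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))
    def forward(self, x):
        return self.net(x)

def augment(x: torch.Tensor, y: torch.Tensor, n: int = 8):
    """Create augmented (input, output) pairs by applying the same random
    permutation of coordinates to both tensors, a toy analogue of the
    geometric transformations described in the summary."""
    pairs = []
    for _ in range(n):
        perm = torch.randperm(x.shape[-1])
        pairs.append((x[..., perm], y[..., perm]))
    return pairs

def ttt_predict(base_model: nn.Module, demos, test_x, steps: int = 20, lr: float = 1e-3):
    """Per-instance training: copy the pretrained model, briefly fine-tune it on
    augmented demonstrations of this one task, then predict on the test input."""
    model = copy.deepcopy(base_model)            # keep the base model untouched
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    train_pairs = [p for dx, dy in demos for p in augment(dx, dy)]
    for _ in range(steps):
        for dx, dy in train_pairs:
            opt.zero_grad()
            loss = loss_fn(model(dx), dy)
            loss.backward()
            opt.step()
    with torch.no_grad():
        return model(test_x)                     # specialized prediction for this instance

# Toy usage: the "task" is element-wise doubling, shown through two demonstrations.
base = ToyModel()
demos = [(x, 2 * x) for x in (torch.rand(16), torch.rand(16))]
print(ttt_predict(base, demos, torch.rand(16)).shape)
```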
[...]
Part 3/4:
The researchers applied this test time training approach to an 8-billion-parameter language model and achieved a 53% accuracy on the ARC public validation set, improving the state-of-the-art by nearly 25%. The ARC benchmark is a challenging test of artificial general intelligence (AGI), where the average human score is around 60%.
The researchers' findings challenge the assumption that symbolic components are strictly necessary for solving complex reasoning tasks. Instead, they suggest that the critical factor may be the allocation of proper computational resources during test time, regardless of whether these resources are deployed through symbolic or neural mechanisms.
[...]
Part 4/4:
This research highlights the potential of test time training as a powerful technique for scaling AI systems and reaching AGI. By leveraging the existing data and models more effectively, rather than solely relying on synthetic data or increased training time, the researchers have demonstrated a promising path forward in the quest for artificial general intelligence.
#technology #ai #agents !summarize
Part 3/3:
By integrating Crew AI with LangTrace, you can gain valuable visibility into the inner workings of your AI agent applications. This allows you to optimize performance, monitor costs, and ensure the quality of your AI-powered solutions. LangTrace's open-source nature and easy integration make it a powerful tool for building robust and transparent AI applications.
An update on China’s Chang’e 8 lunar mission reveals that the spacecraft will land on the moon’s south pole and test resource utilization technologies.
China’s Chang’e 8 mission will introduce new technologies and scientific experiments for lunar exploration. According to a post shared on Chinese social media, Wang Qiong, chief designer of the mission, recently presented an update on the project in Beijing.
The presentation highlighted a four-wheeled spacecraft with a humanoid-shaped upper section, although the purpose and significance of the design remain unclear.
Scheduled for 2028, the mission aims to land near the moon’s south pole to test resource utilization technologies, such as 3D-printing lunar regolith into bricks, and conduct ecosystem experiments.
Designed with advanced scientific tools
Slides from the post, uploaded onto the Chinese social media platform Weibo, reveal details of the Chang’e 8 spacecraft. The four-legged lander, based on previous successful Chang’e missions, will carry scientific instruments like cameras, telescopes, and a seismometer. It will also include a crane to deploy payloads and spacecraft on the lunar surface.
The lander will carry a six-wheeled rover, similar to the Yutu rovers from earlier Chang’e missions. The rover will be equipped with a panoramic camera, lunar penetrating radar, an infrared spectrometer, and a payload for sample analysis and storage.
The Chang’e 8, along with the 2026 Chang’e 7 mission, serves as a precursor to China’s planned International Lunar Research Station, set for construction in the 2030s with support from Russia and other partners.
Roadmap to dominate market
While China has not publicly revealed any other humanoid robots for space exploration, the government is focused on mass-producing humanoid robots by 2025 and aims to dominate the emerging market by 2027.
In January, Beijing launched a $1.4 billion state-backed robotics fund, while Shanghai announced plans in July to create a $1.4 billion humanoid industry fund. The effort is also backed by Chinese President Xi Jinping’s policy of developing “new productive forces” in technology.
The country is already utilizing robots, as shown at the recent Hangzhou Marathon, where two robots, Go2 and B2, served as official cheerleaders and pacemakers. The four-legged robots ran alongside participants, with Go2 playing music, offering encouragement and safety tips, shaking hands, and performing tricks like backflips and handstands.
Humans have dreamed of creating thinking machines from ancient times. Folklore and historical attempts to build programmable devices reflect this long-standing ambition and fiction abounds with the possibilities of intelligent machines, imagining their benefits and dangers. It's no wonder that when OpenAI released the first version of GPT (Generative Pretrained Transformer), it quickly gained widespread attention, marking a significant step toward realizing this ancient dream.
GPT-3 was a landmark moment in AI due to its unprecedented size, featuring 175 billion parameters, which enabled it to perform a wide range of natural language tasks without extensive fine-tuning. This model was trained using big data, allowing it to generate human-like text and engage in conversations. It also had the ability to perform few-shot learning, significantly improving its versatility and demonstrated usefulness in commercial AI applications such as chatbots and virtual assistants.
Today, AI is increasingly becoming embedded into many aspects of daily life, from social media to work processes and as the technology improves, its influence will continue to grow. To understand the directions the technology can take, it helps to understand how we got here. Here is a history of major developments in AI:
Pre-20th century
1726
Jonathan Swift's fantastic novel “Gulliver's Travels” introduces the idea of The Engine, a large mechanical contraption used to assist scholars in generating new ideas, sentences and books.
Scholars turn handles on the machine, which rotates wooden blocks inscribed with words. The machine is said to create new ideas and philosophical treatises by combining words in different arrangements:
"Every one knew how laborious the usual method is of attaining to arts and sciences; whereas by his contrivance the most ignorant person, at a reasonable charge and with a little bodily labour, might write books in philosophy, poetry, politics, laws, mathematics and theology, without the least assistance from genius or study."
- Jonathan Swift's Gulliver's Travels (1726)
Swift's satire anticipates the concept of algorithmic text generation, which is now a reality with modern AI. AI models can produce coherent text by combining words and ideas based on underlying algorithms, similar to what Swift's fictional Engine is meant to do.
1914
Spanish engineer Leonardo Torres y Quevedo demonstrates the first chess-playing machine, El Ajedrecista at the Exposition Universelle in Paris. It used electromagnets and was fully automated. El Ajedrecista automatically played a simple chess endgame of king and rook versus king. The machine required no human intervention once it was set up—it autonomously made legal chess moves and if the human opponent made an illegal move, the machine would signal the error. If the machine was placed in a winning position, it was able to checkmate the human opponent reliably.
1921
Karel Čapek's play "Rossum's Universal Robots" (R.U.R.) premieres in Prague, introducing the word "robot," which soon enters English. In Czech, the word "robota" is associated with compulsory or forced work performed by peasants in a feudal system. The term "robot" quickly gained international recognition after the play's success and became the standard term for mechanical or artificial beings created to perform tasks. Though Čapek's robots are organic, the word came to be associated with mechanical, humanoid machines designed to perform monotonous, unskilled labor.
1939
John Vincent Atanasoff, a professor of physics and mathematics at Iowa State College, and his graduate student Clifford Berry create the Atanasoff-Berry Computer (ABC) there with a grant of USD 650. The ABC computer is considered one of the earliest digital electronic computers and a milestone in the field of American computer science.
While the ABC is never fully operational or widely used, it introduced several key concepts that would become foundational in the development of modern computing.
Unlike previous computing devices that relied on decimal systems, the ABC used binary (1s and 0s) to represent data, which became the standard for computers thereafter. The ABC is also one of the first computers to use electronic circuits for computation instead of mechanical or electromechanical systems, allowing for faster and more reliable calculations. The ABC separated data storage (memory) from the processing unit (logic operations), a principle still followed in modern computer architecture. It used capacitors to store data and could handle up to 29 simultaneous equations.
ABC employed around 300 vacuum tubes for its logic operations, which made it much faster than earlier mechanical calculators. Vacuum tubes, though bulky and prone to failure, are a key development in electronic computing. The ABC weighed over 700 pounds and could solve up to 29 simultaneous linear equations.
1943
Warren S. McCulloch and Walter Pitts publish "A Logical Calculus of the Ideas Immanent in Nervous Activity" in the Bulletin of Mathematical Biophysics [1]. It is one of the seminal works in the history of both neuroscience and AI. The paper lays the foundation for the idea that the brain can be understood as a computational system and introduces the concept of artificial neural networks, now a key technology in modern AI. This idea inspires computer systems that simulate brain-like functions and processes, particularly through neural networks and deep learning.
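As an aside, the McCulloch-Pitts idea of a neuron is simple enough to sketch in a few lines: a unit fires when the weighted sum of its binary inputs reaches a threshold. The snippet below is my own illustration, not code from the 1943 paper.

```python
# A minimal McCulloch-Pitts style threshold unit (illustrative only).
def mcp_neuron(inputs, weights, threshold):
    # Fires (outputs 1) when the weighted sum of binary inputs meets the threshold.
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Logical AND and OR expressed as threshold units over two binary inputs.
for a in (0, 1):
    for b in (0, 1):
        and_out = mcp_neuron([a, b], [1, 1], threshold=2)
        or_out = mcp_neuron([a, b], [1, 1], threshold=1)
        print(f"a={a} b={b}  AND={and_out}  OR={or_out}")
```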
1950
British mathematician Alan Turing's landmark paper "Computing Machinery and Intelligence" is published in Mind [2]. This paper is a foundational text in AI and addresses the question, "Can machines think?" Turing's approach established a foundation for future discussions on the nature of thinking machines and how their intelligence might be measured via the "imitation game," now known as the Turing Test. Turing introduced a thought experiment to avoid directly answering the question "Can machines think?" Instead, he rephrased the problem into a more specific, operational form: Can a machine exhibit intelligent behavior indistinguishable from that of a human?
The Turing Test has become a central concept in AI, serving as one way to measure machine intelligence by assessing a machine's ability to convincingly mimic human conversation and behavior.
1951
Marvin Minsky and Dean Edmunds build the first artificial neural network. The Stochastic Neural Analog Reinforcement Calculator (SNARC) is an early attempt to model learning processes in the human brain, specifically through reinforcement learning.
SNARC is designed to simulate the behavior of a rat navigating a maze. The idea is to have the machine mimic the way animals learn through rewards and punishment—adjusting its behavior over time based on feedback. It is an analog computer using a network of 3000 vacuum tubes alongside synaptic weights to simulate 40 neuron-like units.
1952
Allen Newell, a mathematician and computer scientist, and Herbert A. Simon, a political scientist, develop influential programs such as the Logic Theorist and General Problem Solver, which are among the first to mimic human problem-solving abilities using computational methods.
1955
The term "artificial intelligence" is first coined in a workshop proposal titled "A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence,"3 submitted by John McCarthy of Dartmouth College, Marvin Minsky of Harvard University, Nathaniel Rochester from IBM and Claude Shannon from Bell Telephone Laboratories.
The workshop, which took place a year later, in July and August 1956, is generally considered the official birthdate of the burgeoning field of AI.
1957
Frank Rosenblatt, a psychologist and computer scientist, develops the Perceptron, an early artificial neural network that enables pattern recognition based on a two-layer computer learning network. The Perceptron introduces the concept of a binary classifier that can learn from data by adjusting the weights of its inputs through learning algorithms. While limited to solving linearly separable problems, it laid the foundation for future neural networks and machine learning developments.
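A minimal sketch of the perceptron's learning rule, written as an illustration rather than Rosenblatt's original algorithm: weights are nudged whenever the binary prediction is wrong, which converges for linearly separable problems such as logical OR.

```python
# Illustrative perceptron training on a linearly separable problem (logical OR).
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred                 # nonzero only when the prediction is wrong
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
for x, t in data:
    print(x, t, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0)
```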
1958
John McCarthy develops programming language Lisp4, which stands for LISt Processing. Lisp is developed out of McCarthy's work on formalizing algorithms and mathematical logic, particularly influenced by his desire to create a programming language that can handle symbolic information. Lisp soon becomes the most popular programming language used in AI research.
Oliver Selfridge publishes his paper "Pandemonium: A paradigm for learning."5 His pandemonium model proposed a system in which various "demons" (processing units) work together to recognize patterns. The demons compete to identify features in data that has not been preprogrammed, simulating unsupervised learning. Selfridge's model is an early contribution to pattern recognition, influencing future developments in machine vision and AI.
John McCarthy introduces the concept of the Advice Taker in his paper "Programs with Common Sense."6 This program aims to solve problems by manipulating sentences in formal logic, laying the groundwork for reasoning in AI. McCarthy envisions a system that can understand instructions, reason with common-sense knowledge and learn from experience, with the long-term goal of developing AI that can adapt and learn as effectively as humans. This concept helps shape early research in knowledge representation and automated reasoning.
1965
Philosopher Hubert Dreyfus publishes "Alchemy and Artificial Intelligence,"7 arguing that the human mind operates fundamentally differently from computers. He predicts limits to AI progress due to the challenges of replicating human intuition and understanding. His critique is influential in sparking debates about AI's philosophical and practical limits.
I.J. Good writes "Speculations Concerning the First Ultraintelligent Machine,"8 famously asserting that once an ultraintelligent machine is created, it can design even more intelligent systems, making it humanity's last invention—provided it remains controllable. His ideas prefigure modern discussions on AI superintelligence and its risks.
Joseph Weizenbaum develops ELIZA,9 a program that mimics human conversation by responding to typed input in natural language. Although Weizenbaum intends to show the superficiality of human-computer communication, he is surprised by how many users attribute human-like emotions to the program, raising ethical questions about AI and human interaction.
Edward Feigenbaum, Bruce Buchanan, Joshua Lederberg and Carl Djerassi develop DENDRAL at Stanford University.10 It is the first expert system to automate the decision-making process of organic chemists by simulating hypothesis formation. DENDRAL's success marks an advance in AI, demonstrating how systems can perform specialized tasks as well as or better than human experts.
1966
Developed at SRI in the late 1960s, Shakey is the first mobile robot capable of reasoning about its own actions, combining perception, planning and problem-solving.11 In a 1970 Life magazine article, Marvin Minsky predicts that within three to eight years, AI would achieve the general intelligence of an average human. Shakey's achievements mark a milestone in robotics and AI, though Minsky's ambitious timeline proves overly optimistic.
1969
Arthur Bryson and Yu-Chi Ho introduce backpropagation, a method for optimizing multi-stage dynamic systems. Although originally developed for control systems, the algorithm becomes crucial for training multilayer neural networks. It gains wider attention with its 1986 application to neural networks and reaches full prominence only in the 2000s and 2010s, when advances in computing power enable the rise of deep learning.
Marvin Minsky and Seymour Papert publish Perceptrons: An Introduction to Computational Geometry,12 which critically analyzes the limitations of single-layer neural networks. Their work is often blamed for reducing interest in neural networks. In the 1988 edition, they argue that, despite numerous experiments with perceptrons, progress had already stalled by the mid-1960s for lack of theoretical understanding.
1970
Terry Winograd creates SHRDLU, a groundbreaking natural language understanding program.13 SHRDLU can interact with users in plain English to manipulate objects in a virtual block world, demonstrating the potential for computers to understand and respond to complex instructions. It is an early achievement in natural language processing, though its success is limited to specific, highly structured environments. SHRDLU's capabilities highlight both the promise and the challenges of achieving broader AI language understanding.
1972
Developed at Stanford University, MYCIN is one of the first expert systems created to assist doctors in diagnosing bacterial infections and recommending antibiotic treatments.14 MYCIN uses a rule-based approach to simulate the decision-making process of human experts and creates a platform for the development of medical AI systems. However, due to ethical and legal concerns, it is never implemented in clinical practice.
1973
James Lighthill presents a critical report to the British Science Research Council on the progress of AI research, concluding that AI has failed to deliver on its early promises.15 He argues that the field has not produced significant breakthroughs, leading to a drastic reduction in government funding for AI in the UK. This report contributed to the onset of the first AI winter16, a period of diminished interest and investment in AI research.
1980
WABOT-217, a humanoid robot developed at Waseda University in Japan, is built starting in 1980 and completed around 1984. It followed WABOT-1, which had been built in 1973. While WABOT-1 focused on basic mobility and communication, WABOT-2 is more specialized, designed specifically as a musician robot. It can read musical scores with its camera "eyes," converse with humans, play music on an electronic organ and even accompany a human singer. This project represents a meaningful step toward the development of humanoid robots and AI capable of performing complex, human-like tasks such as artistic expression.
1982
Japan launched the Fifth Generation Computer Systems Project (FGCS) with the goal of developing computers that could handle logical reasoning and problem-solving, pushing AI research forward. This ambitious project aimed to build machines capable of performing tasks such as natural language processing and expert systems. Though it was halted in 1992, the FGCS project and its findings contributed greatly to the development of the concurrent logic programming field.
1984
At the annual meeting of the Association for the Advancement of Artificial Intelligence (AAAI), Roger Schank and Marvin Minsky caution about an impending "AI Winter," predicting that inflated expectations surrounding AI will soon lead to a collapse in investment and research, similar to the funding reduction in the mid-1970s. Their prediction came true within three years as interest in AI dwindled due to unmet promises, resulting in decreased funding and a slowdown in progress. This period became known as the second AI Winter.
Schank and Minsky's warning highlights the cyclical nature of AI hype, in which bursts of optimism are followed by disillusionment when the technology fails to meet investors' and the public's expectations.
1986
David Rumelhart, Geoffrey Hinton and Ronald Williams publish the seminal paper "Learning representations by back-propagating errors," describing the backpropagation algorithm.18 The method allows neural networks to adjust their internal weights by "back-propagating" the error through the network, improving the ability of multilayer networks to learn complex patterns. Building on the 1969 work of Arthur Bryson and Yu-Chi Ho, the paper applies backpropagation specifically to neural networks, overcoming previous limitations in training multilayer models and sparking renewed interest in the field.
This breakthrough makes artificial neural networks viable for practical applications and opens the door to the deep learning revolution of the 2000s and 2010s.
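As a rough illustration of the idea, the toy Python sketch below trains a tiny two-layer network on XOR by pushing the output error backward through the hidden layer. The architecture, learning rate, iteration count and random seed are arbitrary choices, and convergence on such a small network depends on the initialization.

```python
import numpy as np

# Toy backpropagation sketch: a 2-4-1 network learns XOR by propagating
# the output error backward to compute gradients for every weight.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the error flows output -> hidden via the chain rule
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(3))  # should approach [0, 1, 1, 0] after training
```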
1987
During his Educom keynote speech, Apple CEO John Sculley presents the Knowledge Navigator video, which imagines a future in which digital smart agents help users access vast amounts of information over networked systems.19 The concept video depicts a professor interacting with a knowledgeable, voice-activated assistant that can retrieve data, answer questions and display information from what we now recognize as the internet. It foresees many elements of modern technology, such as AI assistants, networked knowledge databases and our interconnected digital world.
1988
Judea Pearl publishes Probabilistic Reasoning in Intelligent Systems, revolutionizing how AI processes information under uncertainty.20 This work introduces Bayesian networks, a formalism for representing complex probability models and the algorithms for performing inference within them. Pearl's methods allow AI systems to make reasoned decisions in uncertain environments, influencing fields far beyond AI, including engineering and the natural sciences. His contributions are recognized with the 2011 Turing Award, which cited his role in creating the "representational and computational foundation" for modern probabilistic reasoning in AI.21
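As a toy illustration of the kind of reasoning Bayesian networks support, the sketch below defines a two-variable network (Rain causing WetGrass) and computes a posterior by enumeration; the structure and the probabilities are invented for the example.

```python
# Tiny Bayesian-network sketch in the spirit of Pearl's formalism:
# two binary variables, Rain -> WetGrass, and inference by enumeration.
# The structure and probabilities here are invented for illustration.

p_rain = 0.2
p_wet_given_rain = {True: 0.9, False: 0.1}   # P(WetGrass=true | Rain)

def p_joint(rain: bool, wet: bool) -> float:
    """P(Rain=rain, WetGrass=wet) from the factored model."""
    pr = p_rain if rain else 1 - p_rain
    pw = p_wet_given_rain[rain] if wet else 1 - p_wet_given_rain[rain]
    return pr * pw

# Posterior P(Rain=true | WetGrass=true) via Bayes' rule / enumeration.
evidence = sum(p_joint(r, True) for r in (True, False))
posterior = p_joint(True, True) / evidence
print(f"P(rain | wet grass) = {posterior:.3f}")  # ~0.692
```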
Rollo Carpenter develops Jabberwacky22, an early chatbot designed to simulate human conversation that is interesting, entertaining and humorous. Unlike rule-based systems, Jabberwacky learns from human interactions to generate more natural dialogue, paving the way for later conversational AI models. It is one of the first attempts to create AI that mimics spontaneous, everyday human conversation through continuous learning from its interactions with users.
Researchers from the IBM T.J. Watson Research Center publish "A Statistical Approach to Language Translation," marking a pivotal shift from rule-based to probabilistic methods in machine translation.23 This approach, exemplified by IBM's Candide project24, uses 2.2 million English-French sentence pairs, primarily sourced from the Canadian Parliament's proceedings. This new methodology emphasizes learning from statistical patterns in data rather than attempting to comprehend or "understand" the languages, reflecting the broader trend toward machine learning that relies on analyzing known examples. This probabilistic model paved the way for many future advancements in natural language processing and machine translation.
Marvin Minsky and Seymour Papert release an expanded edition of their 1969 book Perceptrons, a seminal critique of early neural networks. In the new prologue, titled "A View from 1988," they reflect on the slow progress in the field of AI, noting that many researchers continue to repeat mistakes of the past because they are unfamiliar with earlier challenges.12 They highlight the need for the deeper theoretical understanding they see as lacking in earlier neural network research, underscoring their original criticisms while acknowledging emerging approaches that would later lead to modern deep learning advances.
1989
Yann LeCun and a team of researchers at AT&T Bell Labs achieve a breakthrough by successfully applying the backpropagation algorithm to a multilayer neural network to recognize handwritten ZIP code images.24 This is one of the first practical applications of deep learning using convolutional neural networks. Despite the limited hardware of the time, it takes about three days to train the network, a meaningful improvement over earlier attempts. The system's success in handwritten digit recognition, a key task for automating postal services, demonstrates the potential of neural networks for image recognition tasks and lays the foundation for the explosive growth of deep learning in the following decades.
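The core operation in such a network is the convolution itself, sketched below in Python: a small filter slides across an image and responds where its pattern appears. The tiny image and the hand-picked vertical-edge filter are invented for illustration and stand in for the filters a real network would learn from data.

```python
import numpy as np

# Sketch of the convolution at the core of a convolutional neural network:
# a small filter slides over the image and responds where its pattern
# appears. The image and the vertical-edge filter below are invented.
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

kernel = np.array([          # responds to a dark-to-bright vertical edge
    [-1, 1],
    [-1, 1],
], dtype=float)

h = image.shape[0] - kernel.shape[0] + 1
w = image.shape[1] - kernel.shape[1] + 1
feature_map = np.zeros((h, w))
for i in range(h):
    for j in range(w):
        patch = image[i:i + 2, j:j + 2]
        feature_map[i, j] = np.sum(patch * kernel)

print(feature_map)  # strongest responses sit along the dark-to-bright edge
```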
1993
Science fiction author and mathematician Vernor Vinge publishes the essay "The Coming Technological Singularity," in which he predicts that superhuman intelligence will be created within the next 30 years, fundamentally transforming human civilization.25 Vinge argues that technological advances, particularly in AI, will lead to an intelligence explosion—machines surpassing human intelligence—and the end of the human era as we know it. His essay is instrumental in popularizing the concept of the "technological singularity," a moment when AI would surpass human control, sparking debate in AI, ethics and futurism communities.
This prediction continues to influence discussions about the potential impacts of AI and superintelligence, particularly the existential risks and the ethical considerations of creating machines with intelligence far beyond human capability.
1995
Richard Wallace develops the chatbot A.L.I.C.E.26 (Artificial Linguistic Internet Computer Entity), building on the foundation laid by Joseph Weizenbaum's ELIZA program. Unlike ELIZA, which relied on scripted responses to simulate conversation, A.L.I.C.E. leverages the newly emerging World Wide Web to collect and process vast amounts of natural language data, enabling it to engage in more complex and fluid conversations. A.L.I.C.E. uses a pattern-matching technique called AIML (Artificial Intelligence Markup Language) to parse input and generate responses, making it more adaptable and scalable than its predecessors. Wallace's work sets the stage for further advancements in conversational AI, influencing modern virtual assistants and chatbots.
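As a loose illustration of AIML-style pattern matching (not Wallace's actual implementation), the sketch below matches user input against wildcard patterns and fills the captured text into a response template. The patterns and replies are invented, and real AIML is a far richer XML format.

```python
import re

# Toy AIML-style pattern matching: each pattern may contain a "*" wildcard,
# and the captured text can be echoed back in the template, loosely like
# A.L.I.C.E.'s <pattern>/<template> pairs. Patterns here are invented.
RULES = [
    ("HELLO *",      "Hello! How are you today?"),
    ("MY NAME IS *", "Nice to meet you, {0}."),
    ("WHAT IS *",    "I am not sure what {0} is, but I can look it up."),
]

def respond(user_input: str) -> str:
    text = user_input.upper().strip(" .!?")
    for pattern, template in RULES:
        regex = "^" + re.escape(pattern).replace(r"\*", "(.+)") + "$"
        match = re.match(regex, text)
        if match:
            return template.format(*(g.capitalize() for g in match.groups()))
    return "Tell me more."

print(respond("My name is Ada"))      # -> Nice to meet you, Ada.
print(respond("What is a chatbot?"))  # -> I am not sure what A chatbot is, ...
```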
1997
Sepp Hochreiter and Jürgen Schmidhuber introduce Long Short-Term Memory (LSTM), a type of recurrent neural network (RNN) designed to overcome the limitations of traditional RNNs, particularly their inability to capture long-term dependencies in data effectively. LSTM networks are widely used in applications such as handwriting recognition, speech recognition, natural language processing and time series forecasting.
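A single LSTM cell step can be sketched as below, following the standard gate formulation: a forget gate, an input gate and an output gate control what the cell state keeps, absorbs and exposes. The weights here are random and untrained, and the sizes are illustrative.

```python
import numpy as np

# Sketch of one forward step of an LSTM cell: gates decide what to forget,
# what to write into the cell state, and what to expose as the hidden state.
# Weights are random and untrained; sizes are illustrative.
rng = np.random.default_rng(0)
input_size, hidden_size = 3, 4

def lstm_step(x, h_prev, c_prev, W, U, b):
    z = W @ x + U @ h_prev + b                            # all four gates at once
    f, i, o, g = np.split(z, 4)
    f, i, o = (1 / (1 + np.exp(-v)) for v in (f, i, o))   # sigmoid gates
    g = np.tanh(g)                                        # candidate cell update
    c = f * c_prev + i * g                                # long-term cell state
    h = o * np.tanh(c)                                    # short-term hidden state
    return h, c

W = rng.normal(size=(4 * hidden_size, input_size))
U = rng.normal(size=(4 * hidden_size, hidden_size))
b = np.zeros(4 * hidden_size)

h = np.zeros(hidden_size); c = np.zeros(hidden_size)
for x in rng.normal(size=(5, input_size)):                # a 5-step input sequence
    h, c = lstm_step(x, h, c, W, U, b)
print(h)
```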
IBM's Deep Blue makes history by defeating reigning world chess champion Garry Kasparov in a six-game match.27 It is the first time a computer chess program beats a world champion under standard tournament time controls. Deep Blue's victory demonstrates that computers can outperform humans in highly strategic games, long considered a hallmark of human intelligence. The machine's ability to calculate millions of moves per second, combined with advances in game theory and heuristics, enables it to outmaneuver Kasparov, solidifying Deep Blue's place in AI history.
The event also sparked debates about the future relationship between human cognition and AI, influencing subsequent AI research in other fields such as natural language processing and autonomous systems.
1998
Dave Hampton and Caleb Chung create Furby, the first widely successful domestic robotic pet.28 Furby can respond to touch, sound and light and "learn" language over time, starting with its language, Furbish, but gradually "speaking" more English as it interacts with users. Its ability to mimic learning and engage with users makes it a precursor to more sophisticated social robots, blending robotics with entertainment for the first time in a consumer product.
Yann LeCun, Yoshua Bengio and their collaborators publish influential papers on applying neural networks to handwriting recognition.29 Their work focuses on training convolutional neural networks with the backpropagation algorithm, refining the training process and demonstrating the power of CNNs for image and pattern recognition. LeCun and Bengio's research sets the stage for modern deep learning techniques used in a wide range of AI applications today.
2000
Cynthia Breazeal at MIT develops Kismet, a robot designed to interact with human beings through emotional and social cues.30 Kismet is equipped with cameras, microphones and expressive facial features, allowing it to perceive and respond to human emotions such as happiness, sadness and surprise. This development marks an advance in social robotics, exploring how robots can interact with humans more naturally.
2006
Geoffrey Hinton publishes "Learning Multiple Layers of Representation," which summarizes key breakthroughs in deep learning and outlines how multilayer neural networks can be trained more effectively.31 Hinton's work focuses on training multilayer networks to generate sensory data rather than simply classify it. This approach represents a shift from traditional neural networks to what we now call deep learning, allowing machines to learn complex hierarchical representations of data.
2007
Fei-Fei Li and her team at Princeton University initiate the ImageNet project, creating one of the largest and most comprehensive databases of annotated images.32 ImageNet is designed to support the development of visual object recognition software by providing millions of labeled images across thousands of categories. The scale and quality of the dataset enable advances in computer vision research, particularly in training deep learning models to recognize and classify objects in images.
2009
Rajat Raina, Anand Madhavan and Andrew Ng publish "Large-scale Deep Unsupervised Learning using Graphics Processors," arguing that graphics processing units (GPUs) can far outperform traditional multi-core CPUs for deep learning tasks.33 They demonstrate that GPUs' superior computational power can revolutionize the applicability of deep unsupervised learning methods, allowing researchers to train more extensive and complex models more efficiently. This work is instrumental in accelerating the adoption of GPUs in deep learning, leading to the breakthroughs in the 2010s that power modern AI applications in fields such as computer vision and natural language processing.
Computer scientists at Northwestern University's Intelligent Information Laboratory develop Stats Monkey, a program capable of automatically generating sports news stories without human intervention.34 Using game statistics, Stats Monkey can craft coherent narratives about baseball games, complete with recaps, player performances and analysis.
2011
IBM's Watson, an advanced natural language question-answering computer, makes headlines by competing on the game show Jeopardy! against two of the show's most successful champions, Ken Jennings and Brad Rutter, and defeating them.35 Watson's ability to process and interpret natural language and its vast knowledge base allow it to answer complex questions quickly and accurately. This victory highlights the advancements in AI's ability to understand and interact with human language on a sophisticated level.
Apple launches Siri, a virtual assistant integrated into the iOS operating system. Siri features a natural language user interface that allows users to interact with their devices through voice commands. Siri can perform tasks such as sending messages, setting reminders, providing recommendations and answering questions using machine learning to adapt to each user's preferences and voice patterns. This personalized, adaptive voice recognition system gives users an individualized experience and marks a leap in the usability and accessibility of AI-powered assistants for everyday consumers.
2012
Jeff Dean and Andrew Ng train a massive neural network on 10 million unlabeled images taken from YouTube videos.36 Without any prior labeling, the network learns to recognize patterns in the data and, "to our amusement," one neuron becomes particularly responsive to images of cats. The experiment is a demonstration of unsupervised learning, showing how deep neural networks can autonomously learn features from vast amounts of data.
Researchers from the University of Toronto, led by Geoffrey Hinton, design a convolutional neural network that achieves a breakthrough result in the ImageNet Large Scale Visual Recognition Challenge.37 Their CNN, known as AlexNet, achieves a 16% error rate, a substantial improvement over the previous year's best result of 25%. This achievement marks a turning point for deep learning in computer vision, proving that CNNs can outperform traditional image classification methods when trained on large datasets.
2017
Researchers at the Facebook Artificial Intelligence Research (FAIR) lab train two chatbots to negotiate with each other. Although the chatbots are programmed to communicate in English, during their conversations they begin to diverge from structured human language and create their own shorthand to communicate more efficiently.40 This development is unexpected, as the bots optimize their communication without human intervention. The experiment is halted to keep the bots within human-understandable language, but the episode highlights the potential of AI systems to evolve autonomously and unpredictably.
2020
OpenAI introduces GPT-3, a language model with 175 billion parameters, making it one of the largest and most sophisticated AI models to date. GPT-3 demonstrates the ability to generate human-like text, engage in conversations, write code, translate languages and produce creative writing from natural language prompts. As one of the earliest examples of a large language model (LLM), GPT-3's massive size and scale enable it to perform a wide variety of language tasks with little to no task-specific training, demonstrating the potential of AI to understand and produce highly coherent language.
DeepMind's AlphaFold 2 makes a breakthrough in biology by accurately predicting the 3D structures of proteins from their amino acid sequences. This achievement solves a problem that stumped scientists for decades—understanding how proteins fold into their unique three-dimensional shapes. AlphaFold 2's high accuracy in protein structure prediction has implications for disease research and drug development, offering new ways to understand the molecular mechanisms behind illnesses and to design novel therapeutics more efficiently.
2021
MUM (Multitask Unified Model), developed by Google, is a powerful AI model designed to improve the search experience by understanding and generating language across 75 languages. MUM can multitask, analyzing text, images and videos simultaneously, allowing it to tackle more complex and nuanced search queries.41 Unlike traditional models, MUM can handle multimodal inputs and provide comprehensive, context-rich answers to sophisticated questions involving multiple sources of information.
Tesla launches the Full Self-Driving (FSD) Beta, an advanced driver assistance system aimed at achieving fully autonomous driving. The FSD Beta leverages deep learning and neural networks to navigate complex driving scenarios, such as city streets, highways and intersections, in real time. It allows Tesla vehicles to steer, accelerate and brake autonomously under specific conditions while requiring driver supervision. Tesla's FSD Beta marks a step toward the company's goal of fully autonomous vehicles, though regulatory challenges and safety concerns remain on the path to widespread deployment of autonomous driving technology.
2021–2023
OpenAI launches DALL-E, followed by DALL-E 2 and DALL-E 3, generative AI models that produce highly detailed images from textual descriptions. These models use advanced deep learning and transformer architectures to create complex, realistic and artistic images based on user input. DALL-E 2 and 3 expand the use of AI in visual content creation, allowing users to turn ideas into imagery without traditional graphic design skills.
2024
February
Google launches Gemini 1.5 in limited beta, an advanced language model capable of handling context lengths of up to 1 million tokens.42 The model can process and understand vast amounts of information in a single prompt, improving its ability to maintain context in complex conversations and tasks over extended text. Gemini 1.5 represents a notable leap in natural language processing by providing enhanced memory capabilities and contextual understanding over long inputs.
OpenAI publicly announces Sora, a text-to-video model capable of generating videos up to one minute long from textual descriptions.43 This innovation expands the use of AI-generated content beyond static images, enabling users to create dynamic, detailed video clips based on prompts. Sora is expected to open new possibilities in video content creation.
StabilityAI announces Stable Diffusion 3, its latest text-to-image model, which uses an architecture similar to Sora's for generating detailed and creative content from text prompts.44
May
Google DeepMind unveils a new extension of AlphaFold that helps identify cancer and genetic diseases, offering a powerful tool for genetic diagnostics and personalized medicine.45
IBM introduces the Granite™ family of generative AI models as part of its watsonx™ platform. Ranging from 3 billion to 34 billion parameters, Granite models are designed for tasks such as code generation, time-series forecasting and document processing. Open-sourced and available under the Apache 2.0 license, these models are lightweight, cost-effective and customizable, making them well suited to a wide range of business applications.
June
Apple announces Apple Intelligence, an integration of ChatGPT into new iPhones and Siri.46 This integration allows Siri to perform more complex tasks, hold more natural conversations and better understand and execute nuanced commands.
September
NotebookLM introduces DeepDive, a new multimodal AI capable of transforming source materials into engaging audio presentations structured as a podcast.47 DeepDive's ability to analyze and summarize information from different formats, including webpages, text, audio and video, opens new opportunities for creating personalized and automated content across various platforms. This capability makes it a versatile tool for media production and education.
Current AI trends point to new evolutions of generative AI operating on smaller, more efficient foundation models and the rise of agentic AI, where specific AI models work together to complete user requests faster. Further into the future, autonomous vehicles will be cruising the highways, multimodal AI will create audio, video, text and images in a single platform and AI assistants will help users navigate their personal lives and careers.
The Optimus robot, developed by Tesla under Elon Musk's leadership, is designed to replace humans in repetitive tasks. This technological advance promises to ease tiring routines and transform industry, commerce and even pet care. In this article, you will see how this innovation could affect daily life and the much-debated 6×1 work schedule.
The Optimus robot is pitched as an ideal solution for busy families, offering skills such as cleaning counters, watering plants and even doing the shopping, which would make it indispensable to the household routine. Its ability to learn and adapt to a home's needs is a differentiator Elon Musk makes a point of highlighting.
Whether walking the dog, feeding the cats or monitoring the health of other pets, the Optimus robot performs these tasks with precision and reliability. It can keep to a strict care schedule, something that is often difficult for busy owners or those with unpredictable routines.
Unlike humans, the Optimus robot does not complain, does not tire and does not demand benefits such as vacations or days off. Its programming allows it to run 24/7 without interruption. That characteristic is especially relevant in Brazil, where debates over the 6×1 work schedule divide opinion and create tension across several sectors.
Employers would be able to rely on Optimus for monotonous tasks, ensuring uninterrupted productivity. Even though there are concerns about replacing workers, the robot is seen as an effective way to cut costs in industrial and commercial sectors.
Because of this, the robot is expected to reduce the time spent on everyday chores, letting people focus on more important and enjoyable activities. After all, who wouldn't like to come home from work and find the house spotless without lifting a finger?
In 1974, the American magazine Saturday Review, then marking its 50th anniversary, published special commemorative issues reflecting on how the 19th century had led the world to a bleak present.
https://img.inleo.io/DQmYDZSQEQDCq5uqPR8Hccwtv9C9VspiMgsZDoBtCej68fi/1000053171.webp
At the time, society was facing the Watergate scandal, the imminent risk of nuclear war, a population explosion that seemed to threaten the planet's resources and energy, and other global crises.
To close the celebrations, the magazine went for a bolder issue, full of predictions and ideas, some of them eccentric, about what the world would look like half a century later.
One of these issues was devoted entirely to thinking about "how the world could regain its confidence." The issues were organized by Norman Cousins, the magazine's longtime editor, who was also a renowned intellectual, and perhaps the country's greatest optimist at the time.
Recently, Cousins' optimism arrived at my house by mail. An editor friend found a copy of that special issue and sent it to me, knowing that, as someone who writes about the future, I would be curious to see those old predictions.
Cousins assembled a team of the era's great thinkers, including Andrei Sakharov, Neil Armstrong, Jacques Cousteau, Isaac Asimov and Clare Boothe Luce (the only woman in the group). Some of the problems they hoped humanity would overcome have indeed been solved, and those worries have given way to others.
Today there are concerns about obesity epidemics and the damage caused by excess fossil fuels. Only two of the world's major oil and gas companies (Saudi Aramco and Exxon) are worth more than Novo Nordisk, the maker of the weight-loss drugs Ozempic and Wegovy.
Nations now panic over falling birth rates worldwide and shrinking, aging populations. This is a trend I have analyzed in detail in two books focused on the future, in which I predicted that falling birth rates would affect the entire world.
The Adani Group, India's largest coal importer and one of its main coal miners, was founded in 1988 and has businesses ranging from ports and thermal power plants to cement. Its clean energy unit, AGEL, is developing a vast solar and wind plant in the state of Gujarat, in western India, at a cost of US$20 billion.
Five times the size of Paris and visible from space, the world's largest clean energy plant is under construction, intended to supply enough electricity to power all of Switzerland. The sheer size of the solar and wind plant being developed in the arid salt desert on India's western edge makes it one of the most significant sources of clean energy on the planet. The project's scale is so vast that even those responsible for it struggle to keep up with its pace.
Over the next three decades, the rapidly expanding economy will see the largest growth in energy demand of any country in the world, according to the IEA.
India is the world's third-largest energy consumer, even though its energy use and emissions per person are less than half the world average, according to data from the Paris-based International Energy Agency (IEA). That could change quickly: thanks to rising incomes, energy demand has doubled since 2000, and 80% of it is still met by coal, oil and solid biomass.
India is comfortably positioned to grow at an annual rate of at least 6% over the coming years, according to experts, and could become the world's third-largest economy before the end of this decade. The largest solar and wind plant will certainly contribute to that.
So far, the venture has been ranked the world's largest clean energy park and is expected to be completed in about five years, generating enough electricity to power 16 million Indian homes.
The success of the Khavda Renewable Energy Park is essential to India's efforts to reduce pollution and meet its climate goals while serving the growing energy needs of the world's most populous nation and fastest-growing economy. Located just 12 miles from one of the world's most dangerous borders, which separates India and Pakistan, the world's largest solar and wind plant will cover more than 200 square miles.
According to Sagar Adani, executive director of Adani Green Limited (AGEL), in a region this large, free of obstacles, with no wildlife, no vegetation and no housing, there is no better use for the land. The Adani Group's clean energy pivot comes at a time when India has set some ambitious climate targets. Prime Minister Narendra Modi has pledged that renewable sources such as solar and wind will meet 50% of India's energy needs by the end of this decade.
The government has set a target of 500 gigawatts of electricity-generating capacity from non-fossil sources by 2030. AGEL aims to supply at least 9% of that, with nearly 30 GW generated at its Khavda solar and wind plant in Gujarat alone.
According to Adani, failing to make the transition to renewable energy is not an option. India has no choice but to start doing things at a size and scale never imagined before.
Wi-Fi 8 could begin rolling out in 2028, but without major speed gains. Instead, the next generation of wireless internet is expected to focus mainly on offering users greater stability. That is what a MediaTek document indicates, which specialized outlets in the US have had the chance to review.
https://img.inleo.io/DQmf8H1GwUQLMBs5gbrWbFq69EAZXoveUNhPPbJDzQjxbK6/1000053172.webp
While Wi-Fi 7 is still far from ubiquitous, companies are already working on the next generation of the connection. Unlike the current technology (802.11be), however, Wi-Fi 8 (802.11bn) is not expected to bring a new connection band or higher transfer rates.
The new version of wireless internet should therefore still use the 2.4GHz, 5GHz and 6GHz bands, with the same 4096-QAM modulation and a maximum channel width of 320MHz. The estimated maximum transfer rate of 23Gbps for Wi-Fi 7 should also carry over.
To deliver greater stability, according to MediaTek's documentation, Wi-Fi 8 implements Coordinated Spatial Reuse (Co-SR), Coordinated Beamforming (Co-BF), Dynamic Sub-Channel Operation (DSO) and an improved version of the Modulation and Coding Scheme (MCS).
Co-SR should help in high-density areas such as offices. It allows a dynamic adjustment for devices when signal strength fluctuates and, according to MediaTek's tests, can improve overall system efficiency by 15% to 25%.
Co-BF should help in a similar way, coordinating the direction of the signal in areas with multiple access points. The technology improves targeting by avoiding sending the signal where it is not needed and concentrating it on the areas with the most activity. Tests show improvements of 20% to 50% in shared mesh networks.
DSO in Wi-Fi 8 will allow the network to create sub-channels based on each device's needs, improving overall transmission efficiency and avoiding bottlenecks. On advanced devices, MediaTek achieved improvements of up to 80%.
Finally, the improved MCS can help smooth connection transitions on mobile devices as the user moves through different areas covered by the signal. Tests show bandwidth gains of 5% to 30%.
Part 1/3:
The introduction of generative AI tools like ChatGPT has led to a 21% decrease in the weekly number of posts for automation-prone jobs compared to manual-intensive jobs.
Writing jobs were affected the most, with a 30% decrease, followed by software app/web development (20%) and software engineering (10%).
There are no signs of demand rebounding, revealing a growing trend of job replacement by generative AI.
The number of bids submitted by freelancers to secure job posts in automation-prone jobs rose by 8.5% after the introduction of ChatGPT.
In the author's personal experience, this increase was as high as 400%, meaning much more competition for the remaining jobs.
[...]
Part 2/3:
Jobs that include ChatGPT in their skill requirements saw an increase, with over 88% of these jobs falling into automation-prone categories.
The ability to integrate AI tools into work is becoming increasingly valued, and workers are updating their skill sets to include generative AI capabilities.
Reskilling and augmenting human potential with AI is essential for navigating the evolving job market.
The next wave of automation may target computer operations, with autonomous agents potentially taking over a wide range of tasks.
Areas like healthcare, physics, and mathematics may be the most affected, but the focus should be on using these tools to enhance human potential rather than replace it.
[...]
Part 3/3:
Those who can adapt and learn to effectively integrate these tools into their work will be better positioned to thrive in the evolving job market.
Reskilling and embracing the augmentation of human potential with AI will be crucial for organizations and individuals to succeed in the future.
This video has already been summarized: https://inleo.io/threads/view/mightpossibly/re-taskmaster4450le-2kuulkrbd
Part 1/5:
In a surprising turn of events, it appears that OpenAI may have accidentally released its highly anticipated o1 reasoning model, referred to as "01" in the video, ahead of schedule. The model, said to possess advanced reasoning and creativity capabilities, has been the subject of much speculation and anticipation within the AI community.
The discovery of this accidental release began with a tweet from a user called "me and jcraft 39", who shared an interaction with a model that exhibited impressive image analysis capabilities. The user described how the model was able to "think about the image for 7 seconds" before providing a detailed response.
Part 2/5:
Further investigation revealed that this model was not the previously available "01 preview" model, but rather the full-fledged "01" model. This was confirmed when another user, T or blaho, discovered evidence of the model's advanced capabilities, including the ability to analyze images and provide detailed explanations.
The key difference between the 01 and 01 preview models is their level of sophistication. While the 01 preview model has been available for some time, the 01 model is a significant upgrade, with the ability to outperform the 01 preview across a wide range of tasks.
Part 3/5:
One example of this difference was demonstrated in a simple reasoning benchmark test. The 01 preview model consistently failed to provide the correct answer, while the 01 model was able to correctly identify the relationship between the balls in the scenario.
The accidental release of the 01 model has provided a rare glimpse into its advanced capabilities. Users have been able to interact with the model and witness its impressive image analysis skills, with one user even conducting a side-by-side comparison with GPT-4.
Part 4/5:
The results of this comparison were intriguing, as the 01 model demonstrated a more detailed and nuanced understanding of the image, breaking down the various elements and patterns. However, it is worth noting that the model did not provide the correct answer to the specific question asked, highlighting the ongoing challenges in developing AI systems with truly human-level reasoning.
The accidental release of the 01 model has significant implications for the future of AI. It suggests that OpenAI may be closer to achieving their goal of developing a model with advanced reasoning and creativity capabilities than previously thought.
Part 5/5:
Furthermore, the rapid progress in areas like image analysis indicates that the field of AI is advancing at a rapid pace, and that we may soon see even more impressive breakthroughs in the near future. As Sam Altman, the CEO of OpenAI, has hinted, the release of the full 01 model may not be too far away, potentially as soon as next week.
Overall, the accidental release of the 01 model has provided a tantalizing glimpse into the future of AI, and has left the AI community eagerly anticipating the official unveiling of this powerful new technology.
#technology #ai #llm !summarize
Part 1/7:
Over the past few years, large language models (LLMs) like GPT-4 and Claude 3.5 Sonnet have become incredibly powerful tools, capable of generating human-like text, answering complex questions, coding, tutoring, and even engaging in philosophical debates. These models have set new benchmarks for AI capabilities.
Part 2/7:
However, there's a catch. As these models become more sophisticated, they also become more resource-intensive. Scaling up model parameters, which essentially means making them larger and more complex, requires enormous amounts of compute power. This translates to higher costs, more energy consumption, and greater latency, especially when deploying these models in real-time or edge environments.
Test time compute refers to the computational effort used by a model when generating outputs, rather than during its training phase. As most large language models are designed to be incredibly powerful right out of the gate, they need to be big - really big. But this "bigger is better" approach comes with significant costs.
Part 3/7:
The dominant strategy over the past few years has been to simply make the models bigger, by increasing the number of parameters. This method has proven effective, but it comes with its own challenges. On the other hand, optimizing test time compute offers a more strategic alternative. Instead of relying on massive models, we could deploy smaller, more efficient models that use additional computation selectively during inference to improve their outputs.
The researchers have developed two main mechanisms to scale up compute during the models' usage phase without needing to scale up the model itself:
Part 4/7:
Verifier Reward Models: These are separate models that evaluate or verify the steps taken by the main language model when it tries to solve a problem. This process-based approach helps the model become more accurate by ensuring that every part of its reasoning is sound.
Adaptive Response Updating: This allows the model to adapt and refine its answers on the fly based on what it learns as it goes. Instead of just spitting out one answer, the model revises its response multiple times, taking into account what it got right and wrong in the previous attempts.
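A schematic sketch of how these two mechanisms fit together is shown below. The generate, revise and verifier_score functions are placeholder stubs invented for illustration; in the research described here, the verifiers and revision models are fine-tuned language models rather than stubs.

```python
import random

# Schematic sketch of test-time compute: a verifier scores candidate
# answers and the model revises its best attempt a few times. The
# generate/revise/verifier_score stubs below stand in for real model
# calls; names and scoring are invented for illustration.
def generate(prompt: str) -> str:
    return f"draft answer to: {prompt}"

def revise(prompt: str, previous: str) -> str:
    return previous + " (revised)"

def verifier_score(prompt: str, answer: str) -> float:
    return random.random()   # a real verifier would score the reasoning steps

def answer_with_test_time_compute(prompt: str, samples: int = 4, revisions: int = 2) -> str:
    # 1) Sample several candidates and keep the one the verifier likes best.
    candidates = [generate(prompt) for _ in range(samples)]
    best = max(candidates, key=lambda a: verifier_score(prompt, a))
    # 2) Adaptive response updating: refine the best candidate, keeping a
    #    revision only if the verifier scores it higher than the current best.
    for _ in range(revisions):
        candidate = revise(prompt, best)
        if verifier_score(prompt, candidate) > verifier_score(prompt, best):
            best = candidate
    return best

print(answer_with_test_time_compute("What is 17 * 24?"))
```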
Part 5/7:
The researchers call this approach "compute optimal scaling," which is about being smart with how we use computing power. Instead of using a fixed amount of compute for every single problem, this strategy allocates compute resources dynamically based on the difficulty of the task or prompt.
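In pseudocode terms, that allocation policy might look like the sketch below. The difficulty heuristic and the budget tiers are invented for illustration; a real system would estimate difficulty from the model's own behavior rather than from prompt length.

```python
# Sketch of "compute optimal scaling": spend more inference compute on
# prompts that look harder. The difficulty proxy and budget tiers here
# are invented placeholders.
def estimate_difficulty(prompt: str) -> float:
    # Placeholder heuristic: longer prompts count as harder (0.0 - 1.0).
    return min(len(prompt) / 500.0, 1.0)

def compute_budget(prompt: str) -> dict:
    difficulty = estimate_difficulty(prompt)
    if difficulty < 0.3:
        return {"samples": 1, "revisions": 0}    # easy: answer directly
    if difficulty < 0.7:
        return {"samples": 4, "revisions": 2}    # medium: some search
    return {"samples": 16, "revisions": 4}       # hard: search plus refinement

for p in ["2 + 2?", "Prove that the sum of two even integers is even." * 5]:
    print(compute_budget(p))
```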
To evaluate the effectiveness of these new techniques, the researchers used the MATH benchmark, a collection of high school-level math problems designed to test deep reasoning and problem-solving skills. This dataset was chosen because it is a perfect challenge for large language models, requiring not only the right answer but also an understanding of the steps needed to get there.
Part 6/7:
The researchers used fine-tuned versions of Google's Pathways Language Model (PaLM 2), which were specifically trained for revision and verification tasks. This allowed the models to be highly skilled at refining responses and verifying solutions, crucial abilities for optimizing test-time compute.
The results show that using the compute optimal scaling strategy, models can achieve similar or even better performance while using four times less computation compared to traditional methods. In some cases, a smaller model using this strategy can even outperform a model that is 14 times larger.
Part 7/7:
This research, along with OpenAI's recent o1 model release, demonstrates that by optimizing how and where computation is used, AI models can achieve high performance without needing to be excessively large. This allows for more efficient models that perform at or above the level of much bigger ones by being strategic about their computational power. The future of AI seems to be shifting away from the "scale is all you need" paradigm toward more efficient ways to get smarter models.