Sears Home Services exposed millions of customer records through unsecured AI chatbot databases

The retail landscape in the United States has undergone a seismic shift over the past decade, with the once-ubiquitous Sears department stores largely vanishing from the American suburbs. However, while the physical storefronts have retreated, the brand’s infrastructure—specifically its massive appliance repair arm, Sears Home Services—remains a cornerstone of its operations. As the company has modernized its customer service approach by integrating artificial intelligence, a significant security lapse has exposed millions of its customers to potential identity theft, fraud, and privacy violations.

New research conducted by security analyst Jeremiah Fowler has revealed that sensitive customer data, including chat logs, audio files, and text transcriptions, was left in publicly accessible, unencrypted databases. This oversight, which involved the company’s AI voice agent known as “Samantha” and its underlying “kAIros” technology, highlights the growing risks inherent in the rapid, and sometimes reckless, deployment of generative AI within corporate customer service divisions.

A Massive Data Exposure

The scale of the breach discovered by Fowler is significant. By the time the databases were secured in February, they contained approximately 3.7 million individual chat logs, alongside 1.4 million audio files and their corresponding text transcripts. These files spanned a period from 2024 to early 2025. The data contained within these logs included highly sensitive personal information, such as full customer names, residential addresses, primary phone numbers, and detailed inventories of home appliances.

Beyond the logistical data, the exposure included records of delivery appointments and specific repair service requests. For a company that claims to perform more than seven million appliance repairs annually, the volume of data handled by its digital assistants is substantial, making the lack of basic security protocols—such as encryption or password protection—a critical point of failure.

The Chronology of the Discovery

The discovery was made in early February when Jeremiah Fowler, a researcher with Black Hills Information Security, identified three separate, publicly accessible databases linked to Sears Home Services. Upon investigation, he found that these databases were not protected by any standard authentication measures, allowing anyone with the URL to access the contents.

Fowler immediately initiated a disclosure process, contacting representatives at Transformco, the parent company that manages the Sears brand and its home services division. According to Fowler, the databases were secured shortly after his initial notification. However, the exact duration of the exposure remains unknown, leaving a gap in the timeline where it is impossible to determine if malicious actors had accessed the information prior to the discovery.

In the aftermath of the disclosure, communication between the security researcher and the company proved inconsistent. While Fowler reported receiving an initial response from a staff member claiming he would be connected to a manager overseeing the Samantha AI chatbot, he states that no such contact was ever established, despite subsequent follow-up attempts. Transformco has since declined to respond to multiple inquiries regarding the incident or the internal processes that led to such a profound security lapse.

The Risks of Ambient Recording

Perhaps the most alarming aspect of the breach involves the audio files. Many of the 1.4 million recordings captured by the AI system extended far beyond the duration of a standard customer service interaction. In some instances, the recording sessions lasted up to four hours.

Analysis of these files revealed that customers often failed to terminate their calls, or the system continued to record long after the AI assistant had ceased its primary functions. Because the system was live, it captured ambient audio from inside the homes of unsuspecting customers. These recordings included background conversations, television audio, and other private household sounds. For customers who believed they were speaking to a secure, professional entity, the discovery that these hours of intimate, private home life were stored in an unencrypted, public database represents a severe breach of trust.

The Limitations of Generative AI

The leaked logs also provide a window into the consumer experience with the kAIros-powered Samantha agent, revealing a high level of frustration. The transcripts illustrate a recurring pattern where the AI fails to resolve issues, yet continues to deflect requests to speak with a human agent.

In one documented case, a customer repeated the phrase “Where’s my technician?” 28 times, eventually culminating in the exclamation, “You’re a computer.” Another transcript shows a user engaging in a 76-minute call, during which they requested human assistance just two minutes in. The AI responded with a scripted defense, stating that it was “fully equipped to address your needs efficiently,” only to later admit it was “facing some errors” and failing to fulfill the customer’s request.

This pattern of “AI-induced frustration” is a growing concern for companies looking to replace human labor with automation. When the technology fails to perform, the resulting degradation of the customer relationship can be significant, particularly when the system acts as a gatekeeper that prevents customers from reaching human support staff.

Broader Implications for Corporate Privacy

The Sears Home Services incident serves as a cautionary tale for corporations rushing to integrate generative AI. As companies attempt to lower overhead costs through automation, the lack of rigorous data governance can lead to catastrophic reputational damage.

Carissa Véliz, an associate professor at the University of Oxford and a scholar of privacy, suggests that the incident underscores a fundamental power imbalance. Consumers are often forced to interact with automated systems to receive essential services, yet they are rarely given the choice to opt out of data collection or recording. “In the long run, you want your customers to be safe and feel comfortable, not alienated and exploited,” Véliz notes.

Furthermore, the data exposed in the Sears breach is a goldmine for cybercriminals. Phishing operations thrive on the type of information found in these logs; knowing a customer’s address, phone number, and specific appliance brand allows attackers to craft highly convincing scams. A threat actor could pose as a legitimate Sears repair representative, referencing specific service history to extract further financial information or to gain entry to a home.

The Path Forward: Security vs. Efficiency

The incident raises urgent questions about the standards governing the use of AI in customer-facing roles. As of early 2025, there is no standardized federal regulatory framework in the United States that dictates how companies must secure the data collected by AI voice agents. Consequently, the burden of security rests entirely on the companies themselves.

Industry experts suggest that at a minimum, corporations must implement the following to prevent future incidents:

  1. End-to-End Encryption: All data, including audio and text logs, must be encrypted at rest and in transit.
  2. Access Control: Databases must be restricted behind robust authentication, with access limited only to authorized personnel.
  3. Data Minimization: Companies should only retain the minimum amount of data necessary to perform the service, and should not record ambient audio after a call has concluded.
  4. Human Escalation: Companies must provide a clear, accessible path to human intervention, ensuring that customers are not trapped in a loop with faulty AI.
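To make the data-minimization point concrete, a pre-storage redaction step might look like the following minimal Python sketch. The patterns, field names, and example transcript here are purely illustrative assumptions for this article, not drawn from the kAIros system or any Transformco code:

```python
import re

# Hypothetical sketch: mask obvious PII (phone numbers, email addresses)
# in a chat transcript before it is retained, so stored logs carry less
# sensitive data. Patterns are simplified for illustration only.
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def minimize(transcript: str) -> str:
    """Return a copy of the transcript with phone numbers and emails masked."""
    redacted = PHONE_RE.sub("[PHONE]", transcript)
    redacted = EMAIL_RE.sub("[EMAIL]", redacted)
    return redacted

log = "Customer: call me at 555-867-5309 or jane@example.com about my dryer."
print(minimize(log))
# Customer: call me at [PHONE] or [EMAIL] about my dryer.
```

A production system would go further (named-entity detection for addresses, retention time limits, and deleting raw audio once a transcript is produced), but even a filter this simple ensures that a leaked database exposes far less than the verbatim logs did in this incident.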

For Transformco, the road to recovery involves addressing the trust deficit created by this exposure. Without a transparent explanation of how the lapse occurred and what steps are being taken to rectify the security posture of the kAIros system, the company risks alienating a core segment of its remaining customer base. The incident stands as a stark reminder that while technology can scale service, it cannot replace the necessity of professional, secure, and respectful data stewardship.
