Security Posts
The best VPN trials of 2024: Expert tested and reviewed
We found the best VPN free trial offers so you can test a VPN's speed and reliability before you commit.
Categories: Security Posts
Researchers spot cryptojacking attack that disables endpoint protections
(Image credit: Getty Images)
Malware recently spotted in the wild uses sophisticated measures to disable antivirus protections, destroy evidence of infection, and permanently infect machines with cryptocurrency-mining software, researchers said Tuesday.
Key to making the unusually complex system of malware operate is a function in the main payload, named GhostEngine, that disables Microsoft Defender or any other antivirus or endpoint-protection software that may be running on the targeted computer. It also hides any evidence of compromise. “The first objective of the GhostEngine malware is to incapacitate endpoint security solutions and disable specific Windows event logs, such as Security and System logs, which record process creation and service registration,” said researchers from Elastic Security Labs, who discovered the attacks.
When it first executes, GhostEngine scans machines for any EDR (endpoint detection and response) software that may be running. If it finds any, it loads drivers known to contain vulnerabilities that allow attackers to gain access to the kernel, the core of all operating systems that's heavily restricted to prevent tampering. One of the vulnerable drivers is an anti-rootkit file from Avast named aswArPots.sys. GhostEngine uses it to terminate the EDR security agent. A malicious file named smartscreen.exe then uses a driver from IObit named iobitunlockers.sys to delete the security agent binary.
Why Your Wi-Fi Router Doubles as an Apple AirTag
Image: Shutterstock.
Apple and the satellite-based broadband service Starlink each recently took steps to address new research into the potential security and privacy implications of how their services geo-locate devices. Researchers from the University of Maryland say they relied on publicly available data from Apple to track the location of billions of devices globally — including non-Apple devices like Starlink systems — and found they could use this data to monitor the destruction of Gaza, as well as the movements and in many cases identities of Russian and Ukrainian troops.
At issue is the way that Apple collects and publicly shares information about the precise location of all Wi-Fi access points seen by its devices. Apple collects this location data to give Apple devices a crowdsourced, low-power alternative to constantly requesting global positioning system (GPS) coordinates.
Both Apple and Google operate their own Wi-Fi-based Positioning Systems (WPS) that obtain certain hardware identifiers from all wireless access points that come within range of their mobile devices. Both record the Media Access Control (MAC) address that a Wi-Fi access point uses, known as a Basic Service Set Identifier or BSSID.
Periodically, Apple and Google mobile devices will forward their locations — by querying GPS and/or by using cellular towers as landmarks — along with any nearby BSSIDs. This combination of data allows Apple and Google devices to figure out where they are within a few feet or meters, and it’s what allows your mobile phone to continue displaying your planned route even when the device can’t get a fix on GPS.
With Google’s WPS, a wireless device submits a list of nearby Wi-Fi access point BSSIDs and their signal strengths — via an application programming interface (API) request to Google — whose WPS responds with the device’s computed position. Google’s WPS requires at least two BSSIDs to calculate a device’s approximate position.
Apple’s WPS also accepts a list of nearby BSSIDs, but instead of computing the device’s location based on the set of observed access points and their received signal strengths and then reporting that result to the user, Apple’s API will return the geolocations of up to 400 additional BSSIDs that are near the one requested. It then uses approximately eight of those BSSIDs to work out the user’s location based on known landmarks.
In essence, Google’s WPS computes the user’s location and shares it with the device. Apple’s WPS gives its devices a large enough amount of data about the location of known access points in the area that the devices can do that estimation on their own.
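That client-side estimation can be pictured with a minimal sketch. This is an illustrative assumption, not Apple's actual algorithm (which is not public): it treats stronger signal strength as "closer" and takes a weighted centroid of the known access-point locations. The coordinates and RSSI values are made up.

```python
def estimate_position(observations):
    """Estimate a device position from known AP locations.

    observations: list of (lat, lon, rssi_dbm) tuples, one per nearby
    BSSID whose geolocation the WPS returned. Stronger (less negative)
    RSSI is treated as "closer", so it gets a larger weight.
    """
    lat_acc = lon_acc = total = 0.0
    for lat, lon, rssi in observations:
        weight = 10 ** (rssi / 20.0)  # crude closeness proxy from RSSI
        lat_acc += lat * weight
        lon_acc += lon * weight
        total += weight
    return lat_acc / total, lon_acc / total

# Three hypothetical APs near College Park, MD; the -40 dBm AP dominates.
aps = [(38.9890, -76.9378, -40), (38.9894, -76.9370, -70), (38.9885, -76.9385, -75)]
lat, lon = estimate_position(aps)
```

The point is only that once the API hands back AP geolocations, the position math is trivial for the client, and for anyone else holding that data.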
That’s according to two researchers at the University of Maryland, who theorized they could use the verbosity of Apple’s API to map the movement of individual devices into and out of virtually any defined area of the world. The UMD pair said they spent a month early in their research continuously querying the API, asking it for the location of more than a billion BSSIDs generated at random.
They learned that while only about three million of those randomly generated BSSIDs were known to Apple’s Wi-Fi geolocation API, Apple also returned an additional 488 million BSSID locations already stored in its WPS from other lookups.
UMD Associate Professor David Levin and Ph.D. student Erik Rye found they could mostly avoid requesting unallocated BSSIDs by consulting the list of BSSID ranges assigned to specific device manufacturers. That list is maintained by the Institute of Electrical and Electronics Engineers (IEEE), which is also sponsoring the privacy and security conference where Rye is slated to present the UMD research later today.
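That OUI-guided guessing can be sketched as follows. The three prefixes are illustrative stand-ins for entries from the IEEE registry, not the researchers' actual list, which runs to tens of thousands of allocations.

```python
import random

# Illustrative OUI prefixes (the first three octets of a MAC address);
# the real allocation list is published by the IEEE.
ALLOCATED_OUIS = ["00:25:9C", "74:AC:B9", "DC:A6:32"]

def random_allocated_bssid(ouis=ALLOCATED_OUIS):
    """Random BSSID whose vendor prefix is actually allocated, so WPS
    lookups aren't wasted on address space no manufacturer uses."""
    suffix = ":".join(f"{random.randrange(256):02X}" for _ in range(3))
    return f"{random.choice(ouis)}:{suffix}"

bssid = random_allocated_bssid()
```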
Plotting the locations returned by Apple’s WPS between November 2022 and November 2023, Levin and Rye saw they had a near global view of the locations tied to more than two billion Wi-Fi access points. The map showed geolocated access points in nearly every corner of the globe, apart from almost the entirety of China, vast stretches of desert wilderness in central Australia and Africa, and deep in the rainforests of South America.
A “heatmap” of BSSIDs the UMD team said they discovered by guessing randomly at BSSIDs.
The researchers said that by zeroing in on or “geofencing” other smaller regions indexed by Apple’s location API, they could monitor how Wi-Fi access points moved over time. Why might that be a big deal? They found that by geofencing active conflict zones in Ukraine, they were able to determine the location and movement of Starlink devices used by both Ukrainian and Russian forces.
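Conceptually, a geofence over WPS results is just a bounding-box filter over the returned BSSID locations, re-run over time to watch devices appear, move, and vanish. A minimal sketch with hypothetical BSSIDs and a rough box around Ukraine:

```python
def geofence(bssid_locations, lat_min, lat_max, lon_min, lon_max):
    """Keep only BSSIDs whose last known location falls inside a
    bounding box. bssid_locations maps BSSID -> (lat, lon)."""
    return {
        bssid: (lat, lon)
        for bssid, (lat, lon) in bssid_locations.items()
        if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max
    }

# Hypothetical data: one AP inside a rough box around Ukraine, one in New York.
seen = {"AA:BB:CC:00:00:01": (50.45, 30.52), "AA:BB:CC:00:00:02": (40.71, -74.00)}
inside = geofence(seen, 44.0, 52.5, 22.0, 40.3)
```

Diffing successive snapshots of `inside` is what turns a static map into movement tracking.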
The reason they were able to do that is that each Starlink terminal — the dish and associated hardware that allows a Starlink customer to receive Internet service from a constellation of orbiting Starlink satellites — includes its own Wi-Fi access point, whose location is going to be automatically indexed by any nearby Apple devices that have location services enabled.
A heatmap of Starlink routers in Ukraine. Image: UMD.
The University of Maryland team geo-fenced various conflict zones in Ukraine, and identified at least 3,722 Starlink terminals geolocated in Ukraine.
“We find what appear to be personal devices being brought by military personnel into war zones, exposing pre-deployment sites and military positions,” the researchers wrote. “Our results also show individuals who have left Ukraine to a wide range of countries, validating public reports of where Ukrainian refugees have resettled.”
In an interview with KrebsOnSecurity, the UMD team said they found that in addition to exposing Russian troop pre-deployment sites, the location data made it easy to see where devices in contested regions originated from.
“This includes residential addresses throughout the world,” Levin said. “We even believe we can identify people who have joined the Ukraine Foreign Legion.”
A simplified map of where BSSIDs that enter the Donbas and Crimea regions of Ukraine originate. Image: UMD.
Levin and Rye said they shared their findings with Starlink in March 2024, and that Starlink told them the company began shipping software updates in 2023 that force Starlink access points to randomize their BSSIDs.
Starlink’s parent SpaceX did not respond to requests for comment. But the researchers shared a graphic they said was created from their Starlink BSSID monitoring data, which shows that just in the past month there was a substantial drop in the number of Starlink devices that were geo-locatable using Apple’s API.
UMD researchers shared this graphic, which shows their ability to monitor the location and movement of Starlink devices by BSSID dropped precipitously in the past month.
They also shared a written statement they received from Starlink, which acknowledged that Starlink User Terminal routers originally used a static BSSID/MAC:
“In early 2023 a software update was released that randomized the main router BSSID. Subsequent software releases have included randomization of the BSSID of WiFi repeaters associated with the main router. Software updates that include the repeater randomization functionality are currently being deployed fleet-wide on a region-by-region basis. We believe the data outlined in your paper is based on Starlink main routers and or repeaters that were queried prior to receiving these randomization updates.”
The researchers also focused their geofencing on the Israel-Hamas war in Gaza, and were able to track the migration and disappearance of devices throughout the Gaza Strip as Israeli forces cut power to the country and bombing campaigns knocked out key infrastructure.
“As time progressed, the number of Gazan BSSIDs that are geolocatable continued to decline,” they wrote. “By the end of the month, only 28% of the original BSSIDs were still found in the Apple WPS.”
Apple did not respond to requests for comment. But in late March 2024, Apple quietly tweaked its privacy policy, allowing people to opt out of having the location of their wireless access points collected and shared by Apple — by appending “_nomap” to the end of the Wi-Fi access point’s name (SSID). Adding “_nomap” to your Wi-Fi network name also blocks Google from indexing its location.
Apple updated its privacy and location services policy in March 2024 to allow people to opt out of having their Wi-Fi access point indexed by its service, by appending “_nomap” to the network’s name.
Rye said Apple’s response addressed the most depressing aspect of their research: that there was previously no way for anyone to opt out of this data collection.
“You may not have Apple products, but if you have an access point and someone near you owns an Apple device, your BSSID will be in [Apple’s] database,” he said. “What’s important to note here is that every access point is being tracked, without opting in, whether they run an Apple device or not. Only after we disclosed this to Apple have they added the ability for people to opt out.”
The researchers said they hope Apple will consider additional safeguards, such as proactive ways to limit abuses of its location API.
“It’s a good first step,” Levin said of Apple’s privacy update in March. “But this data represents a really serious privacy vulnerability. I would hope Apple would put further restrictions on the use of its API, like rate-limiting these queries to keep people from accumulating massive amounts of data like we did.”
The UMD researchers said they omitted certain details from their study to protect the users they were able to track, noting that the methods they used could present risks for those fleeing abusive relationships or stalkers.
“We observe routers move between cities and countries, potentially representing their owner’s relocation or a business transaction between an old and new owner,” they wrote. “While there is not necessarily a 1-to-1 relationship between Wi-Fi routers and users, home routers typically only have several. If these users are vulnerable populations, such as those fleeing intimate partner violence or a stalker, their router simply being online can disclose their new location.”
The researchers said Wi-Fi access points that can be created using a mobile device’s built-in cellular modem do not create a location privacy risk for their users because mobile phone hotspots will choose a random BSSID when activated.
“Modern Android and iOS devices will choose a random BSSID when you go into hotspot mode,” Rye said. “Hotspots are already implementing the strongest recommendations for privacy protections. It’s other types of devices that don’t do that.”
For example, they discovered that certain commonly used travel routers compound the potential privacy risks.
“Because travel routers are frequently used on campers or boats, we see a significant number of them move between campgrounds, RV parks, and marinas,” the UMD duo wrote. “They are used by vacationers who move between residential dwellings and hotels. We have evidence of their use by military members as they deploy from their homes and bases to war zones.”
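The BSSID randomization discussed above is detectable from the address itself: by convention, software-generated random MAC addresses set the "locally administered" bit in the first octet, while vendor-burned addresses leave it clear. A small sketch of that check (the example addresses are made up):

```python
def is_locally_administered(bssid: str) -> bool:
    """True if the BSSID's locally-administered bit is set, as it is by
    convention for software-generated random MAC addresses; vendor-assigned
    addresses leave it clear."""
    first_octet = int(bssid.split(":")[0], 16)
    return bool(first_octet & 0b00000010)

print(is_locally_administered("02:00:5E:10:00:01"))  # True: randomized-style address
print(is_locally_administered("00:25:9C:10:00:01"))  # False: vendor OUI prefix
```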
A copy of the UMD research is available here (PDF).
Microsoft's latest Windows 11 security features aim to make it 'more secure out of the box'
Many of these new Windows 11 security features and upgrades will be enabled by default. Here's why.
Scanning without Scanning with NMAP (APIs FTW), (Tue, May 21st)
A year ago I wrote up using Shodan's API to collect info on open ports and services without actually scanning for them (Shodan's API for the (Recon) Win!). This past week I was trolling through the NMAP scripts directory, and imagine my surprise when I stumbled on shodan-api.nse.
So the network scanner we all use daily can be used to scan without actually scanning? Apparently yes! First the syntax:
nmap <target> --script shodan-api --script-args 'shodan-api.apikey=SHODANAPIKEY'
(Note: use double quotes for script-args if you're doing this in Windows.) This still does a basic scan of the target host, though. To do it without scanning, without even sending any packets to your host, add:

-sn    Do a ping scan (i.e., we're not doing a port scan)
-Pn    Don't ping the host; just assume that it's online

Neat trick there, eh? This essentially tells nmap to do nothing for each host in the target list, but not to forget that script we asked it to run! It also has the advantage of doing the "scan" even if the host is down (or doesn't respond to a ping). Plus, just to be complete:
-n Don't even do DNS resolution
This way NMAP isn't sending anything to the host or even to hosts under the client's control (for instance if they happen to host their own DNS). If you're doing a whole subnet, or the output is large enough to scroll past your buffer, or if you want much (much) more useful output, add this to your script-args clause:
shodan-api.outfile=outputfile.csv

Let's put this all together:

nmap -sn -Pn -n www.cisco.com --script shodan-api --script-args "shodan-api.outfile=out.csv,shodan-api.apikey=<my-api-key-not-yours>"

Starting Nmap 7.92 ( https://nmap.org ) at 2024-05-17 09:53 Eastern Daylight Time
Nmap scan report for www.cisco.com (184.26.152.97)
Host is up.
Host script results:
| shodan-api: Report for 184.26.152.97 (www.static-cisco.com, www.cisco.com, www.mediafiles-cisco.com, www-cloud-cdn.cisco.com, a184-26-152-97.deploy.static.akamaitechnologies.com)
|   PORT  PROTO  PRODUCT      VERSION
|   80    tcp    AkamaiGHost
|_  443   tcp    AkamaiGHost
Post-scan script results:
| shodan-api: Shodan done: 1 hosts up.
|_Wrote Shodan output to: out.csv
Nmap done: 1 IP address (1 host up) scanned in 1.20 seconds

Neat eh? It collects the product and version info (when it can get it). The CSV file looks like this:

IP,Port,Proto,Product,Version
184.26.152.97,80,tcp,AkamaiGHost,
184.26.152.97,443,tcp,AkamaiGHost,

This file format imports directly into a usable format in PowerShell, Python, or just about any tool you might desire, even Excel :-)

Looking at a more "challenging" scan target:

nmap -sn -Pn -n isc.sans.edu --script shodan-api --script-args "shodan-api.outfile=out.csv,shodan-api.apikey=<my-api-key-not-yours>"

IP,Port,Proto,Product,Version
45.60.103.34,25,tcp,,
45.60.103.34,43,tcp,,
45.60.103.34,53,tcp,,
45.60.103.34,53,udp,,
.. and so on.

Look at line 4! If you've ever done a UDP scan, you know that it can take for-e-ver! Since this is just an API call, it collects both TCP and UDP info from Shodan. How many ports are in the output?
type out.csv | wc -l
160

159 ports, that's how many! (Subtract one for the header line.) This would have taken a while with a regular port scan, but with a Shodan query it finishes in how long?

Post-scan script results:
| shodan-api: Shodan done: 1 hosts up.
|_Wrote Shodan output to: out.csv
Nmap done: 1 IP address (1 host up) scanned in 1.20 seconds

Yup, 1.2 seconds! This script is a great addition to nmap: it lets you do a quick-and-dirty scan for which ports and services have been available recently, with a bit of rudimentary info attached.

Did you catch that last hint? If you're doing a pentest, it's well worth digging into that word "recently". Looking at ports that are in the Shodan list but aren't in a real port scan (the kind you'd get from nmap -sT or -sU) can be very interesting. These are services that the client has recently disabled, maybe just for the duration of the pentest. For instance, that FTP server or totally vulnerable web or application server that they have open "only when they need it" (translation: always, except during the annual pentest). If you can pull a diff report between what's in the Shodan output and what's actually there now, that's well worth looking into, say, using archive.org. If you do find something good, my bet is that it falls into your scope! If not, you should update your scope to "services found during the test in the target IP ranges or DNS scopes" or similar. You don't want something like this excluded simply because it's (kinda) not there during the actual assessment :-)

Got another API you'd like to see used in NMAP? Please use our comment form. Stay tuned: I have a list, but if you've got one I haven't thought of, I'm happy to add another one!
===============
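That diff report can be sketched in a few lines of Python: parse the shodan-api CSV, compare it against the ports a live scan actually shows, and flag the leftovers. The sample CSV rows and the live-scan set below are hypothetical.

```python
import csv
import io

# Hypothetical excerpt in the shodan-api.nse CSV output format.
SAMPLE = """IP,Port,Proto,Product,Version
45.60.103.34,25,tcp,,
45.60.103.34,53,tcp,,
45.60.103.34,53,udp,,
"""

def shodan_ports(csv_text):
    """Parse shodan-api CSV output into a set of (port, proto) pairs."""
    return {(int(r["Port"]), r["Proto"]) for r in csv.DictReader(io.StringIO(csv_text))}

def ghost_services(shodan_seen, live_seen):
    """Ports Shodan saw recently that a live scan doesn't show now:
    prime candidates for 'only open when they need it' services."""
    return sorted(shodan_seen - live_seen)

live = {(25, "tcp")}  # hypothetical result of your own nmap -sT/-sU scan
ghosts = ghost_services(shodan_ports(SAMPLE), live)
```

In this made-up case, port 53 (tcp and udp) shows up in Shodan's history but not in the live scan, exactly the kind of finding worth chasing down.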
Rob VandenBrink
rob<at>coherentsecurity.com (c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Eventbrite Promoted Illegal Opioid Sales to People Searching for Addiction Recovery Help
A WIRED investigation found thousands of Eventbrite posts selling escort services and drugs like Xanax and oxycodone—some of which the company’s algorithm recommended alongside addiction recovery events.
How CodeName: "News Bender Project" Was Built with GenAI
In our RootedCON 2024 talk this year we presented News Bender Daily, a blog based on automatically generating content for digital media using GenAI. The idea is simple: thanks to the power of LLMs, you can rewrite news stories from other outlets so you always have fresh content, and do it with whatever tone and slant you want.
Figure 1: How CodeName: "News Bender Project" was built with GenAI
This is a wonderful weapon for SEO, for BlackSEO, for malware distribution, for fake news, or for targeted disinformation. Today I'll explain how we did it, and there isn't much mystery once you understand how it works.
Figure 2: News Bender Daily
The goal was to see how you could build a digital outlet by rewriting other outlets' news and then steering it however you liked. We chose technology as the topic and selected a set of blogs to use as news sources, such as The Hacker News, TechCrunch, Wired, and The Verge. We pull from the RSS feeds of all of them.
Figure 3: Loading RSS sources for rewriting news
From there, the workflow is fairly simple: you select the stories to rewrite (which is what many digital-media writers do) and assign them to a writer on our platform, which is nothing more than a configured GenAI agent.
Figure 4: Assigning stories to GenAI writers
These writer/editor agents are defined by a non-existent persona created with a StyleGAN, plus a writing style used to give the rewritten story the desired tone.
Figure 5: The writers are GenAI agents with personas
With a small twist, these same writers are what we used to turn the project into an outlet for spreading political ideologies, as we showed on the television program with Iker Jiménez and Carmen Porter, where we created our "political-disinformation GenAI journalists."
Figure 6: Creating our writer
Rewriting the news is simply a matter of harnessing multimodal LLMs for everything from creating the headline, choosing the category, and designing the image, to placing the desired links inside the stories.
Figure 7: GenAI agents rewriting the news
To do that, all we have to do is ask the LLM to produce each piece and then stitch them all together to publish the story.
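That stitching can be pictured as building one chat request per story that asks for all the fields back as JSON. A minimal sketch of such a request builder; the field names, wording, and helper function are illustrative, not the project's actual prompts:

```python
def build_rewrite_request(article_text, original_title, style, links):
    """Assemble a chat-completion message list asking an LLM to rewrite
    a story in a given persona style, returning JSON fields that can be
    published directly. All prompt wording here is hypothetical."""
    system = (
        "You are a news writer. Rewrite the article in the style described, "
        "do not mention the original site, and reply as JSON with keys: "
        "title, body, category, image_prompt."
    )
    user = (
        f"Style: {style}\n"
        f"Original title: {original_title}\n"
        f"Links to weave into the body: {', '.join(links)}\n\n"
        f"Article:\n{article_text}"
    )
    return [{"role": "system", "content": system},
            {"role": "user", "content": user}]

messages = build_rewrite_request("Some source article...", "Original headline",
                                 "casual, tech-savvy, first person", ["https://example.com"])
```

The returned message list would then be sent to whichever chat-completion API the pipeline uses, and the JSON reply posted to the blog.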
Figure 8: We ask it to produce the prompt for generating an image from a paragraph of text
First, we ask it to write the DALL-E prompt for the image we'll use as the story's header. As you can see, we feed it the prompt in natural language. To add some flair, we defined a set of image styles that give us variety and uniformity at the same time.
Figure 9: Styles for our images
Now we get to the actual writing. First we choose the headline for the story, so the writer agent has to be configured with something like what you see here. As you can see, we pass it the story's original headline.
Figure 10: Prompt for the story headline
Next, we have it rewrite the body of the story, following the style of the agent selected in the interface. When this runs automatically, author selection is a function that can be random, sequential, or by topic. Whatever you prefer.
Figure 11: Here we ask it to rewrite the story (and to leave out the original site)
Then we ask it to rewrite the story to be SEO-friendly, so our digital outlet gets much more reach. Here is the prompt we used.
Figure 12: The prompt to make the story SEO-friendly
As you can see in Figure 11, we passed it the style we want it to use for the rewrite. This is captured from the agent's definition and can look like what you see below.
Figure 13: Defining a writing style
Now we tell it to insert the links we've selected, or the ones that interest us, into the story. In a malware-distribution or BlackSEO campaign, you can imagine this is the most important part.
Figure 14: Placing links in the story
The same goes for choosing which parts of the story's text to bold: a small prompt that puts GPT-4 to work highlighting the story's key topics.
Figure 15: Choosing the bolded text
The result after these last two processing steps is shown below: links and bold text embedded in the story's own body. We always work in JSON format so we can publish the story directly to the news server.
Figure 16: Result of adding links and bold text
To finish, we choose the story's categories, since this has to be published on WordPress and all the fields need to be complete.
Figure 17: Choosing the category from a list of the blog's categories
And that's it. Once this is done, the story is complete and gets published to the blog, as you can see in the next image.
Figure 18: How a finished story looks
Afterwards, every story is pushed out across social networks to maximize its reach. First we publish it on X (Twitter) automatically.
Figure 19: Posting the story on Twitter (X)
Then, for example, we use MyPublicInbox's Tempos x Tweets service to make it travel much further on that network.
Figure 20: Tempos for Posts / Tweets on MyPublicInbox. As you can see, I push my El lado del mal posts through here.
Then we let the Internet work its magic, and the story ends up referenced and linked from as many sites as possible, building the outlet's relevance.
Figure 21: The Newsbenderdaily story being referenced
In the end, this example shows how easy it is to build a digital outlet to manipulate information, gain relevance, or do bad things to its visitors. We also think it highlights the value of good journalism, as opposed to copying other people's stories, which leads to those journalists being replaceable by a "tiny little GenAI script."
¡Saludos Malignos!
Author: Chema Alonso (Contact Chema Alonso)
Follow Un informático en el lado del mal RSS 0xWord
- Contact Chema Alonso on MyPublicInbox.com
Figura 1: Cómo se creó CodeName: "News Bender Project" con GenAI
Esto es un arma maravillosa para el SEO, para el BlackSEO, para la distribución de Malware, para las FakeNews, o para la desinformación interesada. Hoy os voy a contar cómo lo hicimos, que no tiene mucho misterio una vez que entiendes cómo funciona.
Figura 2: News Bender Daily
El objetivo era ver cómo se podría crear un medio digital re-escribiendo noticias de otros para luego orientarlo a lo que se quisiera. Así que elegimos el tema de la tecnología, y seleccionamos una serie de blog a los que utilizar como fuentes de noticias, como The Hacker News, TechCrunch, Wired y The Verge. De todos ellos bebemos los RSS de noticias.
Figure 3: Loading RSS sources for rewriting the news
After that, the operation is fairly simple: the news items to rewrite are selected (which is what many digital-media writers do) and assigned to a writer on our platform, which is nothing more than the configuration of a GenAI agent.
Figure 4: Assigning news items to GenAI writers
These news writer/editor agents are defined by a non-existent persona created with a StyleGAN, plus a writing style that is used to set the desired tone for the rewritten article.
Figure 5: The writers are characterized GenAI agents
These writers, with a small twist, are the ones we used to turn this project into an outlet for spreading political ideologies, as we showed on the television program with Iker Jiménez and Carmen Porter, where we created our "GenAI political disinformation journalists".
Figure 6: Creating our writer
To rewrite the news, all that is done is to harness the power of multimodal LLMs, covering everything from creating the headline, choosing the category and designing the image, to placing the desired links in the articles.
Figure 7: GenAI agents rewriting the news
For that, all we have to do is ask the LLM to do each task and then stitch the results together to publish the article.
Figure 8: We ask it to build the prompt for generating the image from a paragraph of text
First, we ask it to write the DALL-E prompt for the image we will use as the article's header. As you can see, we feed it the prompt in natural language. To give it a distinctive touch, we define a set of styles for the images, which gives us variety and uniformity at the same time.
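As a sketch (the style presets and wording below are made up, not the project's actual list), the prompt-building step amounts to combining the article text with one of the predefined image styles:

```python
import random

# Illustrative style presets; the real platform defines its own list.
STYLES = [
    "flat vector illustration, muted palette",
    "35mm photojournalism, shallow depth of field",
    "isometric 3D render, soft lighting",
]

def image_prompt(article_title, rng=random):
    """Compose a natural-language image-generation prompt for the header."""
    style = rng.choice(STYLES)
    return f"Header image for a news article titled '{article_title}'. Style: {style}."
```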
Figure 9: Styles for our images
Now let's start with the writing work. First we choose the headline we are going to give this article, so the writer agent has to be configured with something like what you see here. As you can see, we pass it the original headline of the news item.
Figure 10: Prompt for the article headline
Next we have it rewrite the body of the article, following the style of the agent selected in the interface to write the piece. When this runs automatically, author selection is a function that can be random, sequential or topic-based. Whatever you prefer.
Figure 11: Here we ask it to rewrite the article (and to leave out the original site)
Then it's time to ask it to rewrite the article to be SEO-friendly so that our digital outlet has much more impact. Here is the prompt we used.
Figure 12: The prompt to make the article SEO-friendly
As you can see, in Figure 11 we passed it the style we want it to use for rewriting the article. This is captured from the agent's definition, and it can be something like what you see below.
Figure 13: Definition of a writing style
Now we tell it to place into the article the links we have selected, or that interest us. In a malware-distribution or BlackSEO campaign, you can imagine this is the most important part.
Figure 14: Placing links in the article
And the same goes for choosing which parts of the article text to bold. A small prompt puts GPT-4 to work highlighting the article's key topics.
Figure 15: Choosing the bold text
The result obtained after these last two processing steps is shown below, where we have links and bold text within the article body itself. We always work in JSON format so the article can later be published directly to the news server.
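That working format can be pictured as a single JSON object per article. The field names below are hypothetical, not the project's actual schema, but they show the idea: the rewritten HTML (links and bolding already inline) travels together with the metadata needed for publishing.

```python
import json

def build_article(title, body_html, category, image_url, outbound_links):
    """Assemble a rewritten article as a JSON payload for the publisher.

    Field names are illustrative, not the project's actual schema.
    """
    return json.dumps({
        "title": title,
        "content": body_html,  # already contains <a> links and <strong> bolding
        "category": category,
        "featured_image": image_url,
        "outbound_links": outbound_links,
    })
```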
Figure 16: Result of adding links and bold text
To finish, we choose the categories for the articles, since this has to be published on a WordPress site and we need all the fields to be complete.
Figure 17: Choosing the category from a list of the blog's categories
And that's it. Once this is done, the article is complete and gets published on the blog, as you can see in the following image.
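Publishing to WordPress can go through its standard REST API. A minimal sketch of the request construction (authentication, e.g. an application password, is omitted; the endpoint and the title/content/status/categories fields are the standard WP REST API ones):

```python
def wordpress_post_request(base_url, title, content_html, category_id):
    """Build the URL and JSON body for a WordPress REST API publish call."""
    url = base_url.rstrip("/") + "/wp-json/wp/v2/posts"
    payload = {
        "title": title,
        "content": content_html,
        "status": "publish",       # publish immediately rather than draft
        "categories": [category_id],
    }
    return url, payload
```

An HTTP POST of `payload` as JSON to `url`, with valid credentials, creates and publishes the post.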
Figure 18: This is how an article ends up
Afterwards, every article is pushed out across social networks to maximize its reach. To do that, we first publish it on X (Twitter) automatically.
Figure 19: Posting the article on Twitter (X)
Then we use, for example, the Tempos x Tweets service from MyPublicInbox to make it travel much further on that network.
Figure 20: Tempos for Posts / Tweets on MyPublicInbox. As you can see, I push my El lado del mal posts through here.
Then we let the Internet work its magic and the article ends up referenced and linked from as many sites as possible, earning relevance for this digital outlet.
Figure 21: The Newsbenderdaily article being referenced
In the end, this example shows how easy it is to create a digital outlet to manipulate information, gain relevance, or do nasty things to visitors. We also believe it highlights the value of good journalism, as opposed to copying other people's news, which leaves those journalists replaceable by a "tiny little GenAI script".
¡Saludos Malignos!
Author: Chema Alonso (Contact Chema Alonso)
Follow Un informático en el lado del mal RSS 0xWord
- Contact Chema Alonso on MyPublicInbox.com
Categories: Security Posts
ISC Stormcast For Tuesday, May 21st, 2024 https://isc.sans.edu/podcastdetail/8990, (Tue, May 21st)
(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
Categories: Security Posts
Update: oledump.py Version 0.0.76
This new version of oledump brings updates to .msg plugins plugin_msg and plugin_msg_summary.
Plugin plugin_msg_summary can now produce JSON output for attachments (plugin option -J).
Plugin plugin_msg now parses property streams.
More details can be found in my SANS ISC diary entry “Analyzing MSG Files“.
oledump_V0_0_76.zip (http)
MD5: 908FF80DABA00544CB46EBC4C728A15B
SHA256: BFEC0099C35C4D761DC941AA72214444661B6D09C4C0A9B0DDA15DF86812536C
Categories: Security Posts
Wireshark Lua Fixed Field Length Dissector: fl-dissector
I developed a Wireshark dissector (fl-dissector) in Lua to dissect TCP protocols with fixed field lengths. The dissector is controlled through protocol preferences and Lua script arguments.
The port number is an essential argument; if you don't provide one, the default port 1234 is used.
Example for TCP port 50500: -X lua_script1:port:50500.
The protocol name (default fldissector) can be changed with argument protocolname: -X lua_script1:protocolname:firmware.
The length of the fields can be changed via the protocol preferences dialog:
Field lengths are separated by a comma.
Field lengths can also be defined by Lua script argument fieldlengths, like this: -X lua_script1:fieldlengths:1,1,2:L,2:L.
When field lengths are defined via a Lua script argument, this argument takes precedence over the settings in the protocol preferences dialog. fieldlengths can also specify the field type, but only via the Lua script argument, not via protocol preferences (this is due to a Lua script dissector design limitation: protocol preferences can only be read after dissector initialization, and fields have to be defined before dissector initialization). Field types are defined like this: length:type. Type can be L (or l) for a little-endian integer, or B (or b) for a big-endian integer. The length of the integer (8, 16, 24 or 32 bits) is inferred from the field length. Fields without a defined type are byte fields.
The length of the last field is not specified; it contains all the remaining bytes (if any).
Field names are specified with Lua script argument fieldnames: -X lua_script1:fieldnames:Function,Direction,Counter,DataLength,Data.
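To make the fieldlengths grammar concrete, here is a small Python sketch (the dissector itself is written in Lua; this only illustrates the spec format) that parses a spec like 1,1,2:L,2:L into (length, type) pairs:

```python
def parse_fieldlengths(spec):
    """Parse an fl-dissector fieldlengths spec like "1,1,2:L,2:L".

    Returns a list of (length, field_type) tuples, where field_type is
    'L' (little-endian), 'B' (big-endian), or None for a plain byte field.
    """
    fields = []
    for part in spec.split(","):
        if ":" in part:
            length, ftype = part.split(":", 1)
            ftype = ftype.upper()
            if ftype not in ("L", "B"):
                raise ValueError("field type must be L or B: %r" % part)
        else:
            length, ftype = part, None
        fields.append((int(length), ftype))
    return fields
```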
fl_dissector_V0_0_1.zip (http)
MD5: F3DDC28F8D470DC4F9037644D3AF919A
SHA256: BF7406BCD36334E326BF4A6650DECD1D955EB4BD9D9563332AA4AE38507B29D4
Categories: Security Posts
Understanding AddressSanitizer: Better memory safety for your code
By Dominik Klemba and Dominik Czarnota
This post will guide you through using AddressSanitizer (ASan), a compiler plugin that helps developers detect memory issues in code that can lead to remote code execution attacks (such as WannaCry or this WebP implementation bug). ASan inserts checks around memory accesses during compile time, and crashes the program upon detecting improper memory access. It is widely used during fuzzing due to its ability to detect bugs missed by unit testing and its better performance compared to other similar tools.
ASan was designed for C and C++, but it can also be used with Objective-C, Rust, Go, and Swift. This post will focus on C++ and demonstrate how to use ASan, explain its error outputs, explore implementation fundamentals, and discuss ASan's limitations and common mistakes, which will help you catch previously undetected bugs.
Finally, we share a concrete example of a real bug we encountered during an audit that was missed by ASan and can be detected with our changes. This case motivated us to research ASan bug detection capabilities and contribute dozens of upstreamed commits to the LLVM project. These commits resulted in the following changes:
- Extended container sanitization ASan API in LLVM16 by adding support for unaligned memory buffers and adding a function for double-ended contiguous containers. Thanks to that, since LLVM17, std::vector annotations work with all allocators by default.
- Added std::deque annotations in LLVM17. For details, check the libc++ 17 release notes.
- Added annotations for the long string case of std::string in LLVM18 (with all allocators by default). Check the libc++18 release notes for more details.
- We have recently upstreamed short string annotations (read about “short string optimization”), and there is a high probability that they will be included in libc++19, assuming no new concerns or issues arise. Keep an eye on the libc++19 release notes.
ASan also has limitations worth keeping in mind:
- Redzones are not added between variables in structures.
- Redzones are not added between array elements.
- Padding in structures is not poisoned (example).
- Access to allocated, but not yet used, memory in a container won’t be detected, unless the container annotates itself like C++’s std::vector, std::deque, or std::string (in some cases). Note that std::basic_string (with external buffers) and std::deque are annotated in libc++ (thanks to our patches) while std::string is also annotated in Microsoft C++ standard library.
- Incorrect access to memory managed by a custom allocator won’t raise an error unless the allocator performs annotations.
- Only suffixes of a memory granule may be poisoned; therefore, access before an unaligned object may not be detected.
- ASan may not detect memory errors if a random address is accessed: as long as the random number generator returns an addressable address, the access won't be considered incorrect.
- ASan doesn’t understand context and only checks values in shadow memory. If a random address being accessed is annotated as some error in shadow memory, ASan will correctly report that error, even if its bug title may not make much sense.
- Because ASan does not understand what programs are intended to do, accessing an array with an incorrect index may not be detected if the resulting address is still addressable, as shown in figure 18.
The manual poisoning API (declared in sanitizer/asan_interface.h) consists of two macros:
- ASAN_POISON_MEMORY_REGION(addr, size)
- ASAN_UNPOISON_MEMORY_REGION(addr, size)
Categories: Security Posts
2024 RSA Recap: Centering on Cyber Resilience
Cyber resilience is becoming increasingly complex to achieve with the changing nature of computing. Appropriate for this year’s conference theme, organizations are exploring “the art of the possible”, ushering in an era of dynamic computing as they explore new technologies. Simultaneously, as innovation expands and computing becomes more dynamic, more threats become possible – thus, the approach to securing business environments must also evolve.
As part of this year's conference, I led a keynote presentation on the possibilities, risks, and rewards of cyber tech convergence and integration across network and security operations. More specifically, we looked into the future of more open, adaptable security architectures, and what this means for security teams.
LevelBlue Research Reveals New Trends for Cyber Resilience
This year, we also launched the inaugural LevelBlue Futures™ Report: Beyond the Barriers to Cyber Resilience. Led by Theresa Lanowitz, Chief Evangelist of AT&T Cybersecurity / LevelBlue, we hosted an in-depth session based on our research that examined the complexities of dynamic computing. This included an analysis of how dynamic computing merges IT and business operations, taps into data-driven decision-making, and redefines cyber resilience for the modern era. Some of the notable findings she discussed include:
- 85% of respondents say computing innovation is increasing risk, while 74% confirmed that the opportunity of computing innovation outweighs the corresponding increase in cybersecurity risk.
- The adoption of Cybersecurity-as-a-Service (CSaaS) is on the rise, with 32% of organizations opting to outsource their cybersecurity needs rather than managing them in-house.
- 66% of respondents say cybersecurity is an afterthought, while another 64% say cybersecurity is siloed. This isn't surprising when 61% say there is a lack of understanding of cybersecurity at the board level.
Categories: Security Posts
Sifting through the spines: identifying (potential) Cactus ransomware victims
Authored by Willem Zeeman and Yun Zheng Hu
This blog is part of a series written by various Dutch cyber security firms that have collaborated on the Cactus ransomware group, which exploits Qlik Sense servers for initial access. To view all of them, please check the central blog by Dutch special interest group Cyberveilig Nederland [1].
The effectiveness of the public-private partnership called Melissa [2] is increasingly evident. The Melissa partnership, which includes Fox-IT, has identified overlap in a specific ransomware tactic. Multiple partners, sharing information from incident response engagements for their clients, found that the Cactus ransomware group uses a particular method for initial access. Following that discovery, NCC Group’s Fox-IT developed a fingerprinting technique to identify which systems around the world are vulnerable to this method of initial access or, even more critically, are already compromised.
Qlik Sense vulnerabilities
Qlik Sense, a popular data visualisation and business intelligence tool, has recently become a focal point in cybersecurity discussions. This tool, designed to aid businesses in data analysis, has been identified as a key entry point for cyberattacks by the Cactus ransomware group.
The Cactus ransomware campaign
Since November 2023, the Cactus ransomware group has been actively targeting vulnerable Qlik Sense servers. These attacks are not just about exploiting software vulnerabilities; they also involve a psychological component where Cactus misleads its victims with fabricated stories about the breach. This likely is part of their strategy to obscure their actual method of entry, thus complicating mitigation and response efforts for the affected organizations.
For those looking for in-depth coverage of these exploits, the Arctic Wolf blog [3] provides detailed insights into the specific vulnerabilities being exploited, notably CVE-2023-41266, CVE-2023-41265 also known as ZeroQlik, and potentially CVE-2023-48365 also known as DoubleQlik.
Threat statistics and collaborative action
The scope of this threat is significant. In total, we identified 5205 Qlik Sense servers, of which 3143 appear vulnerable to the exploits used by the Cactus group. This is based on the initial scan on 17 April 2024. Closer to home in the Netherlands, we've identified 241 vulnerable systems; fortunately, most don't seem to have been compromised. However, 6 Dutch systems weren't so lucky and have already fallen victim to the Cactus group. It's crucial to understand that "already compromised" can mean that either the ransomware has been deployed and the initial access artifacts left behind were not removed, or the system remains compromised and is potentially poised for a future ransomware attack.
Since 17 April 2024, the DIVD (Dutch Institute for Vulnerability Disclosure) and the governmental bodies NCSC (Nationaal Cyber Security Centrum) and DTC (Digital Trust Center) have teamed up to globally inform (potential) victims of cyberattacks resembling those from the Cactus ransomware group. This collaborative effort has enabled them to reach out to affected organisations worldwide, sharing crucial information to help prevent further damage where possible.
Identifying vulnerable Qlik Sense servers
Expanding on Praetorian’s thorough vulnerability research on the ZeroQlik and DoubleQlik vulnerabilities [4,5], we found a method to identify the version of a Qlik Sense server by retrieving a file called product-info.json from the server. While we acknowledge the existence of Nuclei templates for the vulnerability checks, using the server version allows for a more reliable evaluation of potential vulnerability status, e.g. whether it’s patched or end of support.
This JSON file contains the release label and version numbers by which we can identify the exact version that this Qlik Sense server is running.
Figure 1: Qlik Sense product-info.json file containing version information
Keep in mind that although Qlik Sense servers are assigned version numbers, the vendor typically refers to advisories and updates by their release label, such as “February 2022 Patch 3”.
The following cURL command can be used to retrieve the product-info.json file from a Qlik server:
curl -H "Host: localhost" -vk 'https://<ip>/resources/autogenerated/product-info.json?.ttf'
Note that we specify ?.ttf at the end of the URL to let the Qlik proxy server think that we are requesting a .ttf file, as font files can be accessed unauthenticated. Also, we set the Host header to localhost or else the server will return 400 - Bad Request - Qlik Sense, with the message The http request header is incorrect.
Retrieving this file with the ?.ttf extension trick has been fixed in the patch that addresses CVE-2023-48365 and you will always get a 302 Authenticate at this location response:
> GET /resources/autogenerated/product-info.json?.ttf HTTP/1.1
> Host: localhost
> Accept: */*
>
< HTTP/1.1 302 Authenticate at this location
< Cache-Control: no-cache, no-store, must-revalidate
< Location: https://localhost/internal_forms_authentication/?targetId=2aa7575d-3234-4980-956c-2c6929c57b71
< Content-Length: 0
<
Nevertheless, this is still a good way to determine the state of a Qlik instance, because if it redirects using 302 Authenticate at this location it is likely that the server is not vulnerable to CVE-2023-48365.
An example response from a vulnerable server would return the JSON file:
> GET /resources/autogenerated/product-info.json?.ttf HTTP/1.1
> Host: localhost
> Accept: */*
>
< HTTP/1.1 200 OK
< Set-Cookie: X-Qlik-Session=893de431-1177-46aa-88c7-b95e28c5f103; Path=/; HttpOnly; SameSite=Lax; Secure
< Cache-Control: public, max-age=3600
< Transfer-Encoding: chunked
< Content-Type: application/json;charset=utf-8
< Expires: Tue, 16 Apr 2024 08:14:56 GMT
< Last-Modified: Fri, 04 Nov 2022 23:28:24 GMT
< Accept-Ranges: bytes
< ETag: 638032013040000000
< Server: Microsoft-HTTPAPI/2.0
< Date: Tue, 16 Apr 2024 07:14:55 GMT
< Age: 136
<
{"composition":{"contentHash":"89c9087978b3f026fb100267523b5204","senseId":"qliksenseserver:14.54.21","releaseLabel":"February 2022 Patch 12","originalClassName":"Composition","deprecatedProductVersion":"4.0.X","productName":"Qlik Sense","version":"14.54.21","copyrightYearRange":"1993-2022","deploymentType":"QlikSenseServer"},
<snipped>
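Extracting the interesting fields from that response is straightforward; a minimal sketch in Python:

```python
import json

def extract_release_label(body):
    """Pull the release label and version from a product-info.json body."""
    composition = json.loads(body)["composition"]
    return composition["releaseLabel"], composition["version"]

# Example with a trimmed-down body:
body = '{"composition": {"releaseLabel": "February 2022 Patch 12", "version": "14.54.21"}}'
print(extract_release_label(body))  # ('February 2022 Patch 12', '14.54.21')
```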
We utilised Censys and Google BigQuery [6] to compile a list of potential Qlik Sense servers accessible on the internet and conducted a version scan against them. Subsequently, we extracted the Qlik release label from the JSON response to assess vulnerability to CVE-2023-48365.
Our vulnerability assessment for DoubleQlik / CVE-2023-48365 operated on the following criteria:
- The release label corresponds to vulnerability statuses outlined in the original ZeroQlik and DoubleQlik vendor advisories [7,8].
- The release label is designated as End of Support (EOS) by the vendor [9], such as “February 2019 Patch 5”.
- The release label date is post-November 2023, as the advisory states that “November 2023” is not affected.
- The server responded with HTTP/1.1 302 Authenticate at this location.
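A much-simplified triage of the criteria above could look like the following sketch. It only implements the release-date cut-off and the 302 check, and deliberately ignores the per-advisory patch levels and the End of Support list that the real assessment takes into account:

```python
MONTHS = {m: i for i, m in enumerate(
    ["January", "February", "March", "April", "May", "June", "July",
     "August", "September", "October", "November", "December"], start=1)}

def label_date(release_label):
    """'February 2022 Patch 12' -> (2022, 2) for chronological comparison."""
    month, year = release_label.split()[:2]
    return int(year), MONTHS[month]

def assess_qlik(status_line, release_label=None):
    """Rough triage for DoubleQlik / CVE-2023-48365 (illustrative only)."""
    if "302 Authenticate at this location" in status_line:
        return "likely patched"
    if release_label is None:
        return "unknown"
    # Per the advisory, "November 2023" and later releases are not affected.
    if label_date(release_label) > label_date("November 2023"):
        return "likely patched"
    return "potentially vulnerable"
```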
We shared our fingerprints and scan data with the Dutch Institute of Vulnerability Disclosure (DIVD), who then proceeded to issue responsible disclosure notifications to the administrators of the Qlik Sense servers.
Call to action
Ensure the security of your Qlik Sense installations by checking your current version. If your software is still supported, apply the latest patches immediately. For systems that are at the end of support, consider upgrading or replacing them to maintain robust security. Additionally, to enhance your defences, it's recommended to avoid exposing these services to the entire internet. Implement IP whitelisting if public access is necessary, or better yet, make them accessible only through secure remote working solutions. If you discover you've been running a vulnerable version, it's crucial to contact your (external) security experts for a thorough check-up to confirm that no breaches have occurred. Taking these steps will help safeguard your data and infrastructure from potential threats.
References
- https://cyberveilignederland.nl/actueel/persbericht-samenwerkingsverband-melissa-vindt-diverse-nederlandse-slachtoffers-van-ransomwaregroepering-cactus ︎
- https://www.ncsc.nl/actueel/nieuws/2023/oktober/3/melissa-samenwerkingsverband-ransomwarebestrijding ︎
- https://arcticwolf.com/resources/blog/qlik-sense-exploited-in-cactus-ransomware-campaign/ ︎
- https://www.praetorian.com/blog/qlik-sense-technical-exploit/ ︎
- https://www.praetorian.com/blog/doubleqlik-bypassing-the-original-fix-for-cve-2023-41265/ ︎
- https://support.censys.io/hc/en-us/articles/360038759991-Google-BigQuery-Introduction ︎
- https://community.qlik.com/t5/Official-Support-Articles/Critical-Security-fixes-for-Qlik-Sense-Enterprise-for-Windows/ta-p/2110801 ︎
- https://community.qlik.com/t5/Official-Support-Articles/Critical-Security-fixes-for-Qlik-Sense-Enterprise-for-Windows/ta-p/2120325 ︎
- https://community.qlik.com/t5/Product-Lifecycle/Qlik-Sense-Enterprise-on-Windows-Product-Lifecycle/ta-p/1826335 ︎
Categories: Security Posts
Cybersecurity Concerns for Ancillary Strength Control Subsystems
Additive manufacturing (AM) engineers have been incredibly creative in developing ancillary systems that modify a printed part's mechanical properties. These systems mostly focus on the issue of anisotropic properties of additively built components. This blog post is a good reference if you are unfamiliar with isotropic vs anisotropic properties and how they impact 3D printing. […]
The post Cybersecurity Concerns for Ancillary Strength Control Subsystems appeared first on BreakPoint Labs - Blog.
Categories: Security Posts
Update on Naked Security
To consolidate all of our security intelligence and news in one location, we have migrated Naked Security to the Sophos News platform.
Categories: Security Posts