A cinematographer attempts a digital vs. film comparison, in a very particular way. I think he fails to reach the goal he set for himself, but I don’t believe the theory he presents is incorrect. In a few years, digital imaging tech and the math to perform the transforms will have progressed, and we will be there. Then this silly debate will be over :)
People are so religious about this that they’re resistant to even trying. But not trying does not prove it’s not possible.
If you believe there are attributes that haven’t been identified and/or properly modeled in my personal film emulation, then that means you believe those attributes exist. If they exist, they can be identified. If they don’t exist, then, well, they don’t exist and the premise is false.
It just doesn’t seem like a real option that these attributes exist but can never be identified and are effectively made out of intangible magic that can never be understood or studied.
To insist that film is pure magic and to deny the possibility of usefully modeling its properties would be like saying to Kepler in 1595 as he tried to study the motion of the planets: “Don’t waste your time, no one can ever understand the ways of God so don’t bother. You’ll never be able to make an accurately predictive mathematical model of the crazy motions of the planets — they just do whatever they do.”
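To put a little flesh on that: one obvious candidate attribute is film’s nonlinear characteristic curve (density vs. log exposure), and modeling something like it is just math. Here’s a toy Python sketch; the logistic curve and its parameters are invented stand-ins, and a real emulation would need measured data for grain, halation, spectral sensitivity, and the rest:

```python
import math

def film_s_curve(x, mid_gray=0.18, contrast=1.5):
    """Toy stand-in for a film characteristic curve: a logistic
    squash of log exposure, so shadows roll off in the toe and
    highlights in the shoulder. x is linear luminance in (0, 1]."""
    log_e = math.log2(max(x, 1e-6) / mid_gray)  # exposure in stops from mid gray
    return 1.0 / (1.0 + math.exp(-contrast * log_e))

# A crude "emulation" is then just a per-channel transform:
pixel = (0.18, 0.35, 0.70)                      # linear RGB
print(tuple(round(film_s_curve(c), 3) for c in pixel))
# -> (0.5, 0.808, 0.95): mid gray stays put, highlights compress
```

The point isn’t that this is film, only that each attribute, once identified and measured, is just another transform.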
Professional pilot Ron Rapp has written a fascinating article on a 2014 Gulfstream plane that crashed on takeoff. The accident was 100% human error and entirely preventable – the pilots ignored procedures, checklists, and warning signs again and again. Rapp uses it as an example of what systems theorists call the “normalization of deviance.”
The point is that normalization of deviance is a gradual process that leads to a situation where unacceptable practices or standards become acceptable, and flagrant violations of procedure become normal – despite the fact that everyone involved knows better.
I think this is a useful term for IT security professionals. I have long said that the fundamental problems in computer security are not about technology; instead, they’re about using technology. We have lots of technical tools at our disposal, and if technology alone could secure networks we’d all be in great shape. But, of course, it can’t. Security is fundamentally a human problem, and there are people involved in security every step of the way.
I have seen this personally many times: you can be sloppy several hundred/thousand/whatever times and it doesn’t bite you, so you come to believe that being sloppy carries no risk. Then, boom, a failure out of “nowhere.” When you look back and analyze the failure, you find that complacency with the new normal of sloppy behavior is the root cause.
The behavior of the researchers is reprehensible, but the real issue is that CERT Coordination Center (CERT/CC) has lost its credibility as an honest broker. The researchers discovered this vulnerability and submitted it to CERT. Neither the researchers nor CERT disclosed this vulnerability to the Tor Project. Instead, the researchers apparently used this vulnerability to deanonymize a large number of hidden service visitors and provide the information to the FBI.
Does anyone still trust CERT to behave in the Internet’s best interests?
Here is the blurb from their website ipfs.io, but the best intro is the video included below . . .
IPFS is a peer-to-peer distributed file system that seeks to connect all computing devices with the same system of files. In some ways, IPFS is similar to the Web, but IPFS could be seen as a single BitTorrent swarm, exchanging objects within one Git repository. In other words, IPFS provides a high throughput content-addressed block storage model, with content-addressed hyperlinks. This forms a generalized Merkle DAG, a data structure upon which one can build versioned file systems, blockchains, and even a Permanent Web. IPFS combines a distributed hashtable, an incentivized block exchange, and a self-certifying namespace. IPFS has no single point of failure, and nodes do not need to trust each other.
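That blurb is dense, so here’s a minimal Python sketch of the two core ideas it names: content addressing and Merkle links. This is a concept illustration only, not IPFS’s actual block format or API:

```python
import hashlib, json

store = {}  # toy block store: content hash -> raw bytes

def put(data):
    """Content addressing: a block's address IS the hash of its bytes."""
    addr = hashlib.sha256(data).hexdigest()
    store[addr] = data
    return addr

def put_node(name, links):
    """A Merkle DAG node references its children by hash, so the root
    hash commits to the whole tree (much like a Git tree object)."""
    return put(json.dumps({"name": name, "links": links}).encode())

a = put(b"hello")
b = put(b"world")
root = put_node("greeting", [a, b])
print(root)          # share this one hash and the whole DAG is verifiable
print(store[root])   # the node body, holding the hashes of both leaves
```

Because every link is a hash, the root commits to the entire graph, which is what makes untrusted nodes workable: you can verify anything anyone hands you.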
At first glance this is an attempt to get CCN working in the simplest possible way, by patching together existing tech. I respect this, and it’s not a diss to say so. Having a working stack to play with will be valuable. However, I think the hackiness of this approach will prove too much in practice, and it will have to fall back to a ground-up stack like the one the NDN folks are building to get something performant, with features like mobile nodes built in.
“Every time I go to a new country, I buy a SIM card and activate the Internet and download the map to locate myself,” Osama Aljasem, a 32-year-old music teacher from Deir al-Zour in Syria, explained as he sat on a broken park bench in Belgrade, staring at his smartphone and plotting his next move into northern Europe.
“I would never have been able to arrive at my destination without my smartphone,” he added. “I get stressed out when the battery even starts to get low.”
Humans, quite simply, suck at walking in straight lines: Give a man a blindfold, and he will walk around in circles. In virtual reality, though, this failing becomes very convenient. People are so bad at knowing where they are in space that subtle visual cues can trick them into believing they’re exploring a huge area when they’ve never left the room—a process called redirected walking.
With VR technology getting ever better, people could one day explore immersive virtual spaces—like buildings or even whole cities—on foot in head-mounted displays. But it’s not very immersive if you end up smacking into a real-life wall. “This problem of how you move around when in VR is one of the big unsolved problems of the VR community,” says Evan Suma, a computer scientist at the University of Southern California.
That’s where redirected walking comes in. People don’t really notice, it turns out, if you increase the virtual distance they walk by up to 26 percent or decrease it by up to 14 percent. Or shift the virtual room so they see their path as straight when their real path is curved. Or turn the room up to 49 percent more or 20 percent less than the rotation of their heads.
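A minimal sketch of how those gains get applied, using the thresholds quoted above; the function and the clamping policy are my own invention, not from the article:

```python
# Detection thresholds quoted above: users don't notice virtual distance
# scaled up by 26% or down by 14%, or virtual rotation up to 49% more or
# 20% less than their real head rotation.
TRANSLATION_GAIN = (0.86, 1.26)  # virtual distance = real distance * gain
ROTATION_GAIN = (0.80, 1.49)     # virtual turn = real turn * gain

def apply_gains(real_step_m, real_turn_deg, t_gain, r_gain):
    """Map one frame of tracked movement into virtual movement.
    Clamping keeps the manipulation below the detection thresholds."""
    t = min(max(t_gain, TRANSLATION_GAIN[0]), TRANSLATION_GAIN[1])
    r = min(max(r_gain, ROTATION_GAIN[0]), ROTATION_GAIN[1])
    return real_step_m * t, real_turn_deg * r

# e.g. stretch a 0.75 m real step into ~0.94 m of virtual travel:
print(apply_gains(0.75, 30.0, 1.26, 1.0))  # (0.945, 30.0)
```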
Nerd-out time. It gets really good towards the end of part 3.
The Named Data Networking (NDN) project makes use of the CCN (Content-Centric Networking) architecture developed at the Palo Alto Research Center (PARC). In this presentation, Van Jacobson speaks on content-centric networking at the Future Internet Summer School (FISS 09) in Bremen, Germany in June 2009.
First off, there’s been a subtle shift in the risk calculus around security vulnerabilities. Before, we used to say: “A flaw has been discovered. Fix it, before it’s too late.” In the case of Heartbleed, the presumption is that it’s already too late, that all information that could be extracted has been extracted, and that pretty much everyone needs to execute emergency remediation procedures.
About two days ago, I was poking around with OpenSSL to find a way to mitigate Heartbleed. I soon discovered that in its default config, OpenSSL ships with exploit mitigation countermeasures, and when I disabled the countermeasures, OpenSSL stopped working entirely. That sounds pretty bad, but at the time I was too frustrated to go on. Last night I returned to the scene of the crime.
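For context, the “exploit mitigation countermeasures” are OpenSSL’s internal freelists, which recycle freed buffers themselves instead of returning them to the system allocator, so the platform’s use-after-free defenses never get a look in. A toy Python illustration of the masking effect (nothing like OpenSSL’s real code):

```python
# Toy freelist that recycles buffers without returning them to the
# allocator. A use-after-free against this cache quietly "works":
# the stale reference still sees valid (possibly sensitive) memory.
freelist = []

def fl_malloc(n):
    return freelist.pop() if freelist else bytearray(n)

def fl_free(buf):
    freelist.append(buf)      # note: contents are NOT cleared

secret = fl_malloc(16)
secret[:6] = b"hunter"        # sensitive data written...
fl_free(secret)               # ...buffer "freed", but really just cached

leaked = fl_malloc(16)        # the next caller gets the same buffer back
print(bytes(leaked[:6]))      # b'hunter' -- stale data survives the free
```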
Articles last month revealed that musician Neil Young and Apple’s Steve Jobs discussed offering digital music downloads of ‘uncompromised studio quality’. Much of the press and user commentary was particularly enthusiastic about the prospect of uncompressed 24 bit 192kHz downloads. 24/192 featured prominently in my own conversations with Mr. Young’s group several months ago.
Unfortunately, there is no point to distributing music in 24-bit/192kHz format. Its playback fidelity is slightly inferior to 16/44.1 or 16/48, and it takes up 6 times the space.
There are a few real problems with the audio quality and ‘experience’ of digitally distributed music today. 24/192 solves none of them. While everyone fixates on 24/192 as a magic bullet, we’re not going to see any actual improvement.
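The “6 times the space” figure is straight arithmetic against the 16/48 baseline, easy to sanity-check in Python:

```python
def pcm_kbps(bits, rate_hz, channels=2):
    """Raw PCM bitrate in kilobits per second."""
    return bits * rate_hz * channels / 1000

hi = pcm_kbps(24, 192_000)  # 9216.0 kbps
cd = pcm_kbps(16, 48_000)   # 1536.0 kbps
print(hi / cd)              # 6.0 -- six times the space, zero audible gain

# Nyquist: a 48 kHz sample rate already captures everything up to 24 kHz,
# which is past the ~20 kHz ceiling of human hearing.
```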
See this fantastic video for a walkthrough of why stair-stepping is a total myth.
(Yes, you should still record and produce at 24-bit, since the extra headroom means you never have to worry about clipping, but mastering a properly centered mix down to 16-bit doesn’t lose anything.)
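The headroom claim is arithmetic too: each bit of linear PCM buys roughly 6 dB of dynamic range. A quick check:

```python
import math

def dynamic_range_db(bits):
    """Approximate dynamic range of linear PCM: ~6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # ~96.3 dB: beyond any playback chain
print(round(dynamic_range_db(24), 1))  # ~144.5 dB: useful headroom while tracking
```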
Stephen Wolfram introduces the Wolfram Language in this video that shows how the symbolic programming language enables powerful functional programming, querying of large databases, flexible interactivity, easy deployment, and much, much more.
Dr. Bell’s title at Intel, the world’s largest producer of semiconductors, is director of user experience research at Intel Labs, the company’s research arm. She runs a skunk works of some 100 social scientists and designers who travel the globe, observing how people use technology in their homes and in public. The team’s findings help inform the company’s product development process, and are also often shared with the laptop makers, automakers and other companies that embed Intel processors in their goods.
In 1989, Shawn Rudiman started production work with Ed Vargo as part of the seminal Industrial group T.H.D. (Total Harmonic Distortion). This EBM/Elektro unit became quite popular in the EBM/Industrial music scene of the early-to-mid ’90s. They released 4 full-length albums, countless remixes and compilation releases on both European and domestic labels. During these formative years, Shawn developed a fascination with vintage music machines. In 1997, he decided to stray from his Industrial-EBM roots to explore the depths of pure rhythm and sounds in Techno music.
Rudiman’s all-live sets of non-stop, improvised techno became his trademark. His innate understanding of hardware drum machines, sequencers, samplers and synthesizers gave his performances the fluidity and smoothness of a DJ set, but with complete flexibility in direction and tempo (well before the introduction of live-performance software). These performances gained international attention throughout the Techno community and became the stuff of legend.
Today he resides in the Midwest, still releasing records and remixes. Always a consummate studio enthusiast, Shawn maintains, repairs and builds analog and vintage synthesizers while keeping a busy international touring schedule.
Gender Swap is an experiment that uses the themachinetobeanother.org system as a platform for embodiment experience (a neuroscience technique in which users can feel as if they were in a different body). To create the brain illusion, we use the immersive Oculus Rift head-mounted display and first-person cameras. To create this perception, both users have to synchronize their movements. If one does not correspond to the movement of the other, the embodiment experience does not work, which means both users have to constantly agree on every movement they make. Throughout this experiment, we aim to investigate issues like Gender Identity, Queer Theory, feminist technoscience, Intimacy and Mutual Respect.
Adam Magyar is a computer geek, a college dropout, a self-taught photographer, a high-tech Rube Goldberg, a world traveler, and a conceptual artist of growing global acclaim. But nobody had ever suggested that he might also be a terrorist until the morning that he descended into the Union Square subway station in New York.
More and more of our activity involves not merely the transmission of bits, but the transmission of bits according to algorithms and protocols created by humans and implemented by machines. Messages travel over the Internet because of transmission protocols, coding decisions determine the look and feel of websites, and algorithms determine which links, messages, or stories rise to the top of search engine results and web aggregators’ webpages. Most webpages have automated components, as do most online articles and all video games. Are these algorithm-based outputs “speech” for purposes of the First Amendment? That is, does the Free Speech Clause of the First Amendment apply to government regulation of these or other algorithm-based changes to bits?