Bookmark
Python API - LLM
https://llm.datasette.io/en/stable/python-api.html, posted 16 Jul by peter in ai development free python software
LLM provides a Python API for executing prompts, in addition to the command-line interface.
[...]
To run a prompt against the gpt-3.5-turbo model, run this:

    import llm

    model = llm.get_model("gpt-3.5-turbo")
    model.key = 'YOUR_API_KEY_HERE'
    response = model.prompt("Five surprising names for a pet pelican")
    print(response.text())
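As a purely illustrative follow-up (a sketch assuming only the calls shown above, plus a hypothetical environment variable for the key), the same model object can be reused for several prompts:

    import os
    import llm

    # Same API as the snippet above; the environment variable name is an
    # assumption used here to avoid hard-coding the key in the script.
    model = llm.get_model("gpt-3.5-turbo")
    model.key = os.environ["OPENAI_API_KEY"]

    for question in [
        "Five surprising names for a pet pelican",
        "Five surprising names for a pet walrus",
    ]:
        response = model.prompt(question)
        print(question)
        print(response.text())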
Bookmark
ChatGPT Is a Blurry JPEG of the Web
https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web, posted Feb '23 by peter in ai cognition language opinion
This analogy to lossy compression is not just a way to understand ChatGPT’s facility at repackaging information found on the Web by using different words. It’s also a way to understand the “hallucinations,” or nonsensical answers to factual questions, to which large language models such as ChatGPT are all too prone. These hallucinations are compression artifacts, but—like the incorrect labels generated by the Xerox photocopier—they are plausible enough that identifying them requires comparing them against the originals, which in this case means either the Web or our own knowledge of the world. When we think about them this way, such hallucinations are anything but surprising; if a compression algorithm is designed to reconstruct text after ninety-nine per cent of the original has been discarded, we should expect that significant portions of what it generates will be entirely fabricated.
Bookmark
MusicLM
https://google-research.github.io/seanet/musiclm/examples/, posted Jan '23 by peter in ai audio development music toread
Abstract: We introduce MusicLM, a model generating high-fidelity music from text descriptions such as "a calming violin melody backed by a distorted guitar riff". MusicLM casts the process of conditional music generation as a hierarchical sequence-to-sequence modeling task, and it generates music at 24 kHz that remains consistent over several minutes. Our experiments show that MusicLM outperforms previous systems both in audio quality and adherence to the text description. Moreover, we demonstrate that MusicLM can be conditioned on both text and a melody in that it can transform whistled and hummed melodies according to the style described in a text caption. To support future research, we publicly release MusicCaps, a dataset composed of 5.5k music-text pairs, with rich text descriptions provided by human experts.
Bookmark
lucidrains/deep-daze: Simple command line tool for text to image generation
https://github.com/lucidrains/deep-daze, posted 2021 by peter in ai free graphics opensource python software
Bookmark
Magenta
https://magenta.tensorflow.org/, posted 2021 by peter in ai free music opensource software toread
An open source research project exploring the role of machine learning as a tool in the creative process.
Bookmark
EleutherAI - GPT-Neo
https://www.eleuther.ai/gpt-neo, posted 2021 by peter in ai free nlp opensource
GPT-Neo is the code name for a series of transformer-based language models loosely styled around the GPT architecture that we plan to train and open source. Our primary goal is to replicate a GPT-3 sized model and open source it to the public, for free.
Bookmark
Gödel's Incompleteness Theorem And Its Implications For Artificial Intelligence - deep ideas
www.deepideas.net/godels-incompleteness-theorem-and-its-implications-for-artificial-intelligence/, posted 2017 by peter in ai math toread
This text gives an overview of Gödel’s Incompleteness Theorem and its implications for artificial intelligence. Specifically, we deal with the question of whether Gödel’s Incompleteness Theorem shows that human intelligence could not be recreated by a traditional computer.
Bookmark
Practical Deep Learning For Coders—18 hours of lessons for free
course.fast.ai/, posted 2017 by peter in ai development toread
Welcome to fast.ai’s 7 week course, Practical Deep Learning For Coders, Part 1, taught by Jeremy Howard (Kaggle’s #1 competitor 2 years running, and founder of Enlitic). Learn how to build state of the art models without needing graduate-level math—but also without dumbing anything down. Oh and one other thing… it’s totally free!
Bookmark
Tone Analyzer
https://tone-analyzer-demo.mybluemix.net/, posted 2016 by peter in ai demo language nlp online text writing
This service uses linguistic analysis to detect and interpret emotions, social tendencies, and language style cues found in text.
Bookmark
Single Artificial Neuron Taught to Recognize Hundreds of Patterns | MIT Technology Review
www.technologyreview.com/view/543486/single-artificial-neuron-taught-to-recognize-hundreds-of-patterns/?utm_campaign=socialsync&utm_medium=social-post&utm_source=facebook, posted 2015 by peter in ai cognition science
Hawkins and Ahmad now say they know what’s going on. Their new idea is that distal and proximal synapses play entirely different roles in the process of learning. Proximal synapses play the conventional role of triggering the cell to fire when certain patterns of connections crop up.
...
But distal synapses do something else. They also recognize when certain patterns are present, but do not trigger firing. Instead, they influence the electric state of the cell in a way that makes firing more likely if another specific pattern occurs. So distal synapses prepare the cell for the arrival of other patterns. Or, as Hawkins and Ahmad put it, these synapses help the cell predict what the next pattern sensed by the proximal synapses will be.
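A rough way to picture the mechanism described above (a toy Python sketch, not Hawkins and Ahmad's actual HTM implementation; the class and pattern names are made up for illustration): a proximal match fires the cell, while a distal match only primes it, so a proximal firing that follows a distal match counts as predicted.

    class ToyNeuron:
        """Toy model: proximal input fires the cell, distal input only primes it."""

        def __init__(self, proximal_patterns, distal_patterns):
            self.proximal_patterns = proximal_patterns  # patterns that can fire the cell
            self.distal_patterns = distal_patterns      # patterns that only prime ("depolarize") it
            self.predictive = False                     # primed state set by a distal match

        def sense(self, pattern):
            # Proximal input is what actually triggers firing.
            fired = pattern in self.proximal_patterns
            # If the cell was primed on the previous step, this firing was expected.
            predicted = fired and self.predictive
            # Distal input never fires the cell; it only prepares it for what comes next.
            self.predictive = pattern in self.distal_patterns
            return fired, predicted

    # Example: the cell fires on "B" and is primed by "A", i.e. it predicts that B follows A.
    cell = ToyNeuron(proximal_patterns={"B"}, distal_patterns={"A"})
    for p in ["A", "B", "C", "B"]:
        fired, predicted = cell.sense(p)
        print(p, "fired (predicted)" if predicted else ("fired" if fired else "silent"))

The real neuron model involves many synapses and hundreds of patterns; the sketch only captures the split of roles (triggering versus priming) described in the quote.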