Learning resources
Concepts and explanations used throughout SignalMap.
Core economic ideas
Nominal prices are the money values you see at current market rates. Real prices adjust for inflation so you can compare buying power across time.
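The adjustment can be sketched as a one-line deflation by a price index. The numbers below are hypothetical, purely to show the arithmetic:

```python
def to_real(nominal, cpi, base_cpi):
    """Convert a nominal price to a real (inflation-adjusted) price.

    Divides by the CPI at the observation date and rescales to the
    base period, so real prices are expressed in base-period money.
    """
    return nominal * base_cpi / cpi

# Hypothetical numbers: an item cost 50 in 2010 (CPI 100) and 80 in 2020 (CPI 160).
real_2020 = to_real(80, cpi=160, base_cpi=100)
print(real_2020)  # 50.0 -> the same purchasing power as 50 did in 2010
```

So the nominal price doubled, but in real terms nothing changed.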
Indexing rescales a series so a chosen point (the base year) equals 100. Values above or below 100 show relative change rather than absolute levels.
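A minimal sketch of that rescaling, with made-up price levels:

```python
def index_series(values, base_idx=0):
    """Rescale a series so the value at position base_idx equals 100."""
    base = values[base_idx]
    return [100 * v / base for v in values]

prices = [40, 44, 50, 38]     # hypothetical levels
print(index_series(prices))   # [100.0, 110.0, 125.0, 95.0]
```

Reading the output: 110 means 10% above the base-year level, 95 means 5% below it, regardless of the original units.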
PPP compares what money can buy across countries by using a common basket of goods. It often reflects domestic purchasing power better than market exchange rates do.
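The core calculation is a ratio of basket costs. The basket costs and market rate below are invented for illustration only:

```python
def implied_ppp_rate(basket_local, basket_usd):
    """PPP exchange rate implied by pricing a common basket in both currencies."""
    return basket_local / basket_usd

# Hypothetical: the basket costs 300 units of local currency, or 20 USD.
ppp = implied_ppp_rate(300, 20)   # 15 local units per USD
market_rate = 25                  # hypothetical market exchange rate
print(ppp)  # 15.0
```

A market rate well above the implied PPP rate suggests the currency buys more at home than converting it at the market rate would imply.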
Price constraints affect how much you pay; quantity constraints affect how much you can sell or buy. Under sanctions, volume limits often matter more than price.
Inflation erodes the purchasing power of money over time. Exchange rates determine how much of one currency you get for another. Both shape how economic signals are interpreted.
A supply shock is a sudden change in how much can be produced or sold (e.g. war, crop failure). A demand shock is a sudden change in how much people want to buy. Both can move prices, often in different directions.
Elasticity measures how much quantity bought or sold changes when price changes. If a small price change leads to a big change in quantity, demand or supply is elastic; if quantity barely moves, it is inelastic.
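One common way to compute this is the arc (midpoint) elasticity; the price and quantity figures below are hypothetical:

```python
def elasticity(q0, q1, p0, p1):
    """Arc (midpoint) elasticity: % change in quantity / % change in price."""
    dq = (q1 - q0) / ((q0 + q1) / 2)
    dp = (p1 - p0) / ((p0 + p1) / 2)
    return dq / dp

# Hypothetical: price rises from 10 to 12, quantity falls from 100 to 90.
e = elasticity(100, 90, 10, 12)
print(round(e, 2))  # -0.58 -> |e| < 1, so demand here is inelastic
```

A magnitude above 1 would indicate elastic demand: quantity responding more than proportionally to price.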
Terms of trade measure how many imports a country can buy with a given amount of exports. When export prices rise relative to import prices, a country can buy more imports for the same exports.
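Terms of trade are conventionally computed as the ratio of an export price index to an import price index, rescaled to 100. The index values below are hypothetical:

```python
def terms_of_trade(export_price_index, import_price_index):
    """Terms of trade: export prices relative to import prices, base = 100."""
    return 100 * export_price_index / import_price_index

# Hypothetical: export prices up 20%, import prices up 5% since the base year.
print(terms_of_trade(120, 105))  # ~114.3: the same exports buy ~14% more imports
```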
Money & currency
When more money circulates in an economy without a matching increase in goods and services, each unit of money tends to buy less. Inflation is the general rise in prices that often follows.
In some countries, the official exchange rate differs from the rate at which people actually trade currency in the market. The parallel (or black market) rate often reflects what traders are willing to pay when official access is restricted.
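The gap between the two rates is often quoted as a premium over the official rate. The rates below are invented for illustration:

```python
def parallel_premium(parallel_rate, official_rate):
    """Premium of the parallel rate over the official rate, in percent."""
    return 100 * (parallel_rate - official_rate) / official_rate

# Hypothetical rates (local currency per USD):
print(parallel_premium(parallel_rate=60000, official_rate=42000))  # ~42.9%
```

A large or widening premium is a common sign that official access to foreign currency is restricted.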
Oil & commodities
Oil prices reflect the cost of a barrel of crude oil on world markets. They are a benchmark for energy costs and often signal broader economic and geopolitical conditions.
Brent crude and West Texas Intermediate (WTI) are two benchmark oil types. Brent is widely used for international pricing; WTI is a US benchmark. Both are traded on world markets.
USD/bbl (or USD per barrel) is the price of one barrel of oil in US dollars. A barrel is about 159 litres. It is the standard unit for crude oil pricing.
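The unit conversion is simple division; the 80 USD/bbl price below is just an example figure:

```python
BARREL_LITRES = 158.987  # one oil barrel in litres (roughly 159)

def price_per_litre(usd_per_bbl):
    """Convert a USD-per-barrel crude price to USD per litre."""
    return usd_per_bbl / BARREL_LITRES

print(round(price_per_litre(80), 3))  # 0.503 USD per litre at 80 USD/bbl
```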
Energy & resources
When a country relies heavily on one or few commodities (e.g. oil) for revenue or exports, its economy is sensitive to that commodity's price and demand.
Revenue that swings sharply from year to year—common in commodity-dependent economies—makes it harder to plan budgets and can create boom-and-bust cycles.
Sanctions & constraints
Sanctions are restrictions on trade, finance, or other activities imposed by one country or group on another. They can target specific sectors, entities, or goods.
When exports are constrained, how much you can sell often matters more than the price. Volume reflects the actual bottleneck—what can be exported—rather than what the world price would allow.
Constraints & institutions
Capital controls are rules that limit how much money can move in or out of a country. They are used to stabilise exchange rates or protect reserves during crises.
When official rules restrict trade or access, people often create informal channels—outside official markets—to transact. These markets reflect what people are willing to pay when formal channels are unavailable.
How to read SignalMap charts
Indexing lets you compare different-scale series (e.g. Iran vs Turkey) on the same chart. Both start at 100; values above or below show relative change over time.
Some data (e.g. PPP, gold) are only available annually. Annual resolution smooths short-term noise and focuses on longer-term patterns.
Event markers provide context—political, economic, or geopolitical milestones. They are anchors for interpretation, not explanations of causality.
SignalMap displays patterns and context. It does not assert causality, predict outcomes, or claim that any event caused any observed change in the data.
Reading data responsibly
When two things move together, they are correlated. That does not mean one causes the other. Causation requires evidence that one thing actually leads to the other.
Economic data are estimates and approximations. They may miss informal activity, be revised, or reflect different definitions. What we measure is not always exactly what happens in reality.
Transcript fallacy analysis
This tool labels transcript chunks with candidate rhetorical patterns (e.g. types of potential fallacies). It is experimental: labels are aids for exploration, not proof that a formal fallacy occurred. Three method families are available for comparison: rule-based heuristics, a future classifier model, and an LLM-assisted pass. Each method uses different signals, so they will often disagree — that disagreement is informative, not a bug.
Rule-based heuristics: uses hand-tuned keyword and phrase detectors with simple context guards. What it uses: explicit string patterns and lightweight rules over chunk text. Strengths: transparent, reproducible, and cheap to run; good for baseline coverage and debugging. Weaknesses: English-centric, brittle to paraphrase, and cannot capture full argument structure. Language support: English only for now; Persian heuristics are scaffolded in the codebase but not executed, so Persian transcripts get analysis_supported=false with an explicit note. When it may fail: sarcasm, code-switching, implicit premises, or valid rhetoric that resembles a pattern. It may disagree with the LLM because the LLM infers intent and paraphrase while heuristics only match surface cues.
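A minimal sketch of what a keyword-and-phrase detector looks like. The pattern table and label names here are hypothetical, not SignalMap's actual rules, which may differ in both cues and categories:

```python
import re

# Hypothetical cue table: each candidate label maps to surface patterns.
PATTERNS = {
    "ad_hominem": [r"\bwhat would you know\b", r"\byou people\b"],
    "slippery_slope": [r"\bnext thing you know\b", r"\binevitably leads? to\b"],
}

def label_chunk(text):
    """Return candidate labels whose cue patterns appear in the chunk text."""
    lowered = text.lower()
    return [
        label
        for label, cues in PATTERNS.items()
        if any(re.search(cue, lowered) for cue in cues)
    ]

print(label_chunk("Next thing you know, they will ban everything."))
# ['slippery_slope']
```

Note how the detector fires on surface wording alone: a chunk that paraphrases the same rhetoric without the exact phrase is missed, which is precisely the brittleness described above.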
Classifier model (planned): will use a supervised or embedding-based model trained on labeled examples (or similar signals) rather than fixed keyword lists. What it uses: learned weights from data — typically text features or dense embeddings — mapped to fallacy categories. Strengths: can generalize beyond exact phrases if training matches the domain. Weaknesses: depends on label quality, class balance, and domain shift; errors can be opaque without careful evaluation. When it may fail: out-of-domain topics, rare phrasing, or labels that do not match how annotators defined fallacies. Not yet implemented in this app; results are disabled until the pipeline is ready.
LLM-assisted pass: uses a hosted large language model with structured JSON prompts to assign labels and short rationales per chunk. What it uses: semantic reasoning over the transcript text via the model’s weights (not audio diarization). Strengths: handles varied wording and can supply explanations. Language support: for pasted transcripts, pick English or Farsi in the language control; the model uses dedicated English and Persian system prompts (same JSON schema). YouTube captions supply language automatically. Other languages may use the English prompt with an API note. Weaknesses: non-deterministic across runs, possible hallucinations, and sensitivity to prompt wording; not a substitute for domain validation. When it may fail: subtle logic, unstated assumptions, or chunks where the model overfits to keywords. It may disagree with heuristics because it interprets meaning more freely, or with a future classifier because training objectives differ.
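"Structured JSON" means the model is asked to return one machine-checkable record per chunk. The field names below (chunk_id, labels, rationale) are a hypothetical shape for illustration; SignalMap's actual schema may differ:

```python
import json

# Hypothetical required fields for one per-chunk label record.
REQUIRED_KEYS = {"chunk_id", "labels", "rationale"}

def parse_llm_response(raw):
    """Parse and minimally validate one structured-JSON label record."""
    record = json.loads(raw)
    missing = REQUIRED_KEYS - record.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return record

raw = '{"chunk_id": 3, "labels": ["strawman"], "rationale": "Restates the claim in a weaker form."}'
print(parse_llm_response(raw)["labels"])  # ['strawman']
```

Validating the shape on receipt is what lets a non-deterministic model feed a deterministic display: a malformed response is rejected rather than rendered.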
Heuristics surface explicit cues; classifiers optimize for training distributions; LLMs infer loosely from natural language. The same chunk might trigger a rule, score high in a model, and be rejected by an LLM — or the reverse. Treat disagreement as a signal to read the source text, not as proof that one method is “right.” Compare outputs cautiously and avoid using any single method alone for high-stakes conclusions.
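Treating disagreement as a reading cue can itself be mechanical. A small sketch, assuming each method's output is a mapping from chunk id to a set of labels (hypothetical shape, not SignalMap's internal format):

```python
def disagreements(heuristic, classifier, llm):
    """Return chunk ids where the three methods do not agree on the label set.

    Each argument maps chunk_id -> set of labels; a missing id counts as empty.
    """
    ids = set(heuristic) | set(classifier) | set(llm)
    return sorted(
        i for i in ids
        if not (heuristic.get(i, set()) == classifier.get(i, set()) == llm.get(i, set()))
    )

heur = {1: {"ad_hominem"}, 2: set()}
clf = {1: {"ad_hominem"}, 2: {"strawman"}}
llm_out = {1: {"ad_hominem"}, 2: set()}
print(disagreements(heur, clf, llm_out))  # [2] -> go read chunk 2's source text
```

The flagged ids are an invitation to read the transcript, not a verdict on which method is correct.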