Explanation of the Linux-Kernel Memory Consistency Model
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

:Author: Alan Stern <stern@rowland.harvard.edu>
:Created: October 2017

.. Contents

  1. INTRODUCTION
  2. BACKGROUND
  3. A SIMPLE EXAMPLE
  4. A SELECTION OF MEMORY MODELS
  5. ORDERING AND CYCLES
  6. EVENTS
  7. THE PROGRAM ORDER RELATION: po AND po-loc
  8. A WARNING
  9. DEPENDENCY RELATIONS: data, addr, and ctrl
  10. THE READS-FROM RELATION: rf, rfi, and rfe
  11. CACHE COHERENCE AND THE COHERENCE ORDER RELATION: co, coi, and coe
  12. THE FROM-READS RELATION: fr, fri, and fre
  13. AN OPERATIONAL MODEL
  14. PROPAGATION ORDER RELATION: cumul-fence
  15. DERIVATION OF THE LKMM FROM THE OPERATIONAL MODEL
  16. SEQUENTIAL CONSISTENCY PER VARIABLE
  17. ATOMIC UPDATES: rmw
  18. THE PRESERVED PROGRAM ORDER RELATION: ppo
  19. AND THEN THERE WAS ALPHA
  20. THE HAPPENS-BEFORE RELATION: hb
  21. THE PROPAGATES-BEFORE RELATION: pb
  22. RCU RELATIONS: rcu-link, rcu-gp, rcu-rscsi, rcu-order, rcu-fence, and rb
  23. LOCKING
  24. PLAIN ACCESSES AND DATA RACES
  25. ODDS AND ENDS


INTRODUCTION
------------

The Linux-kernel memory consistency model (LKMM) is rather complex and
obscure.  This is particularly evident if you read through the
linux-kernel.bell and linux-kernel.cat files that make up the formal
version of the model; they are extremely terse and their meanings are
far from clear.

This document describes the ideas underlying the LKMM.  It is meant
for people who want to understand how the model was designed.  It does
not go into the details of the code in the .bell and .cat files;
rather, it explains in English what the code expresses symbolically.

Sections 2 (BACKGROUND) through 5 (ORDERING AND CYCLES) are aimed
toward beginners; they explain what memory consistency models are and
the basic notions shared by all such models.  People already familiar
with these concepts can skim or skip over them.  Sections 6 (EVENTS)
through 12 (THE FROM-READS RELATION) describe the fundamental
relations used in many models.  Starting in Section 13 (AN OPERATIONAL
MODEL), the workings of the LKMM itself are covered.

Warning: The code examples in this document are not written in the
proper format for litmus tests.
They don't include a header line, the initializations are not enclosed
in braces, the global variables are not passed by pointers, and they
don't have an "exists" clause at the end.  Converting them to the
right format is left as an exercise for the reader.


BACKGROUND
----------

A memory consistency model (or just memory model, for short) is
something which predicts, given a piece of computer code running on a
particular kind of system, what values may be obtained by the code's
load instructions.  The LKMM makes these predictions for code running
as part of the Linux kernel.

In practice, people tend to use memory models the other way around.
That is, given a piece of code and a collection of values specified
for the loads, the model will predict whether it is possible for the
code to run in such a way that the loads will indeed obtain the
specified values.  Of course, this is just another way of expressing
the same idea.

For code running on a uniprocessor system, the predictions are easy:
Each load instruction must obtain the value written by the most recent
store instruction accessing the same location (we ignore complicating
factors such as DMA and mixed-size accesses.)  But on multiprocessor
systems, with multiple CPUs making concurrent accesses to shared
memory locations, things aren't so simple.

Different architectures have differing memory models, and the Linux
kernel supports a variety of architectures.  The LKMM has to be fairly
permissive, in the sense that any behavior allowed by one of these
architectures also has to be allowed by the LKMM.


A SIMPLE EXAMPLE
----------------

Here is a simple example to illustrate the basic concepts.  Consider
some code running as part of a device driver for an input device.  The
driver might contain an interrupt handler which collects data from the
device, stores it in a buffer, and sets a flag to indicate the buffer
is full.  Running concurrently on a different CPU might be a part of
the driver code being executed by a process in the midst of a read(2)
system call.  This code tests the flag to see whether the buffer is
ready, and if it is, copies the data back to userspace.  The buffer
and the flag are memory locations shared between the two CPUs.

We can abstract out the important pieces of the driver code as follows
(the reason for using WRITE_ONCE() and READ_ONCE() instead of simple
assignment statements is discussed later):

        int buf = 0, flag = 0;

        P0()
        {
                WRITE_ONCE(buf, 1);
                WRITE_ONCE(flag, 1);
        }

        P1()
        {
                int r1;
                int r2 = 0;

                r1 = READ_ONCE(flag);
                if (r1)
                        r2 = READ_ONCE(buf);
        }

Here the P0() function represents the interrupt handler running on one
CPU and P1() represents the read() routine running on another.  The
value 1 stored in buf represents input data collected from the device.
Thus, P0 stores the data in buf and then sets flag.  Meanwhile, P1
reads flag into the private variable r1, and if it is set, reads the
data from buf into a second private variable r2 for copying to
userspace.  (Presumably if flag is not set then the driver will wait a
while and try again.)

This pattern of memory accesses, where one CPU stores values to two
shared memory locations and another CPU loads from those locations in
the opposite order, is widely known as the "Message Passing" or MP
pattern.  It is typical of memory access patterns in the kernel.

Please note that this example code is a simplified abstraction.
Real buffers are usually larger than a single integer, real device
drivers usually use sleep and wakeup mechanisms rather than polling
for I/O completion, and real code generally doesn't bother to copy
values into private variables before using them.  All that is beside
the point; the idea here is simply to illustrate the overall pattern
of memory accesses by the CPUs.

A memory model will predict what values P1 might obtain for its loads
from flag and buf, or equivalently, what values r1 and r2 might end up
with after the code has finished running.

Some predictions are trivial.  For instance, no sane memory model
would predict that r1 = 42 or r2 = -7, because neither of those values
ever gets stored in flag or buf.

Some nontrivial predictions are nonetheless quite simple.  For
instance, P1 might run entirely before P0 begins, in which case r1 and
r2 will both be 0 at the end.  Or P0 might run entirely before P1
begins, in which case r1 and r2 will both be 1.

The interesting predictions concern what might happen when the two
routines run concurrently.  One possibility is that P1 runs after P0's
store to buf but before the store to flag.  In this case, r1 and r2
will again both be 0.  (If P1 had been designed to read buf
unconditionally then we would instead have r1 = 0 and r2 = 1.)

However, the most interesting possibility is where r1 = 1 and r2 = 0.
If this were to occur it would mean the driver contains a bug, because
incorrect data would get sent to the user: 0 instead of 1.  As it
happens, the LKMM does predict this outcome can occur, and the example
driver code shown above is indeed buggy.


A SELECTION OF MEMORY MODELS
----------------------------

The first widely cited memory model, and the simplest to understand,
is Sequential Consistency.  According to this model, systems behave as
if each CPU executed its instructions in order but with unspecified
timing.  In other words, the instructions from the various CPUs get
interleaved in a nondeterministic way, always according to some single
global order that agrees with the order of the instructions in the
program source for each CPU.  The model says that the value obtained
by each load is simply the value written by the most recently executed
store to the same memory location, from any CPU.

For the MP example code shown above, Sequential Consistency predicts
that the undesired result r1 = 1, r2 = 0 cannot occur.  The reasoning
goes like this:

        Since r1 = 1, P0 must store 1 to flag before P1 loads 1 from
        it, as loads can obtain values only from earlier stores.

        P1 loads from flag before loading from buf, since CPUs execute
        their instructions in order.

        P1 must load 0 from buf before P0 stores 1 to it; otherwise r2
        would be 1 since a load obtains its value from the most recent
        store to the same address.

        P0 stores 1 to buf before storing 1 to flag, since it executes
        its instructions in order.

Since an instruction (in this case, P0's store to flag) cannot execute
before itself, the specified outcome is impossible.

However, real computer hardware almost never follows the Sequential
Consistency memory model; doing so would rule out too many valuable
performance optimizations.  On ARM and PowerPC architectures, for
instance, the MP example code really does sometimes yield r1 = 1 and
r2 = 0.

x86 and SPARC follow yet a different memory model: TSO (Total Store
Ordering).  This model predicts that the undesired outcome for the MP
pattern cannot occur, but in other respects it differs from Sequential
Consistency.
One example is the Store Buffer (SB) pattern, in which each CPU stores
to its own shared location and then loads from the other CPU's
location:

        int x = 0, y = 0;

        P0()
        {
                int r0;

                WRITE_ONCE(x, 1);
                r0 = READ_ONCE(y);
        }

        P1()
        {
                int r1;

                WRITE_ONCE(y, 1);
                r1 = READ_ONCE(x);
        }

Sequential Consistency predicts that the outcome r0 = 0, r1 = 0 is
impossible.  (Exercise: Figure out the reasoning.)  But TSO allows
this outcome to occur, and in fact it does sometimes occur on x86 and
SPARC systems.

The LKMM was inspired by the memory models followed by PowerPC, ARM,
x86, Alpha, and other architectures.  However, it is different in
detail from each of them.


ORDERING AND CYCLES
-------------------

Memory models are all about ordering.  Often this is temporal ordering
(i.e., the order in which certain events occur) but it doesn't have to
be; consider for example the order of instructions in a program's
source code.  We saw above that Sequential Consistency makes an
important assumption that CPUs execute instructions in the same order
as those instructions occur in the code, and there are many other
instances of ordering playing central roles in memory models.

The counterpart to ordering is a cycle.  Ordering rules out cycles:
It's not possible to have X ordered before Y, Y ordered before Z, and
Z ordered before X, because this would mean that X is ordered before
itself.  The analysis of the MP example under Sequential Consistency
involved just such an impossible cycle:

        W: P0 stores 1 to flag   executes before
        X: P1 loads 1 from flag  executes before
        Y: P1 loads 0 from buf   executes before
        Z: P0 stores 1 to buf    executes before
        W: P0 stores 1 to flag.

In short, if a memory model requires certain accesses to be ordered,
and a certain outcome for the loads in a piece of code can happen only
if those accesses would form a cycle, then the memory model predicts
that outcome cannot occur.

The LKMM is defined largely in terms of cycles, as we will see.


EVENTS
------

The LKMM does not work directly with the C statements that make up
kernel source code.  Instead it considers the effects of those
statements in a more abstract form, namely, events.  The model
includes three types of events:

        Read events correspond to loads from shared memory, such as
        calls to READ_ONCE(), smp_load_acquire(), or
        rcu_dereference().

        Write events correspond to stores to shared memory, such as
        calls to WRITE_ONCE(), smp_store_release(), or atomic_set().

        Fence events correspond to memory barriers (also known as
        fences), such as calls to smp_rmb() or rcu_read_lock().

These categories are not exclusive; a read or write event can also be
a fence.  This happens with functions like smp_load_acquire() or
spin_lock().  However, no single event can be both a read and a write.
Atomic read-modify-write accesses, such as atomic_inc() or xchg(),
correspond to a pair of events: a read followed by a write.  (The
write event is omitted for executions where it doesn't occur, such as
a cmpxchg() where the comparison fails.)
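
As an illustrative sketch (not one of the model's standard litmus
tests, but using the kernel's real xchg() primitive), an atomic
exchange gives rise to one event of each kind:

        int x = 0;

        P0()
        {
                int r1;

                /* One read event (obtaining 0) followed by one
                 * write event (storing 5), linked as an atomic pair.
                 */
                r1 = xchg(&x, 5);
        }
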
Other parts of the code, those which do not involve interaction with
shared memory, do not give rise to events.  Thus, arithmetic and
logical computations, control-flow instructions, or accesses to
private memory or CPU registers are not of central interest to the
memory model.  They only affect the model's predictions indirectly.
For example, an arithmetic computation might determine the value that
gets stored to a shared memory location (or in the case of an array
index, the address where the value gets stored), but the memory model
is concerned only with the store itself -- its value and its address
-- not the computation leading up to it.

Events in the LKMM can be linked by various relations, which we will
describe in the following sections.  The memory model requires certain
of these relations to be orderings, that is, it requires them not to
have any cycles.


THE PROGRAM ORDER RELATION: po AND po-loc
-----------------------------------------

The most important relation between events is program order (po).  You
can think of it as the order in which statements occur in the source
code after branches are taken into account and loops have been
unrolled.  A better description might be the order in which
instructions are presented to a CPU's execution unit.  Thus, we say
that X is po-before Y (written as "X ->po Y" in formulas) if X occurs
before Y in the instruction stream.

This is inherently a single-CPU relation; two instructions executing
on different CPUs are never linked by po.  Also, it is by definition
an ordering so it cannot have any cycles.

po-loc is a sub-relation of po.  It links two memory accesses when the
first comes before the second in program order and they access the
same memory location (the "-loc" suffix).

Although this may seem straightforward, there is one subtle aspect to
program order we need to explain.  The LKMM was inspired by low-level
architectural memory models which describe the behavior of machine
code, and it retains their outlook to a considerable extent.  The
read, write, and fence events used by the model are close in spirit to
individual machine instructions.  Nevertheless, the LKMM describes
kernel code written in C, and the mapping from C to machine code can
be extremely complex.

Optimizing compilers have great freedom in the way they translate
source code to object code.  They are allowed to apply transformations
that add memory accesses, eliminate accesses, combine them, split them
into pieces, or move them around.

The use of READ_ONCE(), WRITE_ONCE(), or one of the other atomic or
synchronization primitives prevents a large number of compiler
optimizations.  In particular, it is guaranteed that the compiler will
not remove such accesses from the generated code (unless it can prove
the accesses will never be executed), it will not change the order in
which they occur in the code (within limits imposed by the C
standard), and it will not introduce extraneous accesses.

The MP and SB examples above used READ_ONCE() and WRITE_ONCE() rather
than ordinary memory accesses.  Thanks to this usage, we can be
certain that in the MP example, the compiler won't reorder P0's write
event to buf and P0's write event to flag, and similarly for the other
shared memory accesses in the examples.

Since private variables are not shared between CPUs, they can be
accessed normally without READ_ONCE() or WRITE_ONCE().  In fact, they
need not even be stored in normal memory at all -- in principle a
private variable could be stored in a CPU register (hence the
convention that these variables have names starting with the letter
'r').


A WARNING
---------

The protections provided by READ_ONCE(), WRITE_ONCE(), and others are
not perfect; and under some circumstances it is possible for the
compiler to undermine the memory model.  Here is an example.
Suppose both branches of an "if" statement store the same value to the
same location:

        r1 = READ_ONCE(x);
        if (r1) {
                WRITE_ONCE(y, 2);
                ...  /* do something */
        } else {
                WRITE_ONCE(y, 2);
                ...  /* do something else */
        }

For this code, the LKMM predicts that the load from x will always be
executed before either of the stores to y.  However, a compiler could
lift the stores out of the conditional, transforming the code into
something resembling:

        r1 = READ_ONCE(x);
        WRITE_ONCE(y, 2);
        if (r1) {
                ...  /* do something */
        } else {
                ...  /* do something else */
        }

Given this version of the code, the LKMM would predict that the load
from x could be executed after the store to y.  Thus, the memory
model's original prediction could be invalidated by the compiler.

Another issue arises from the fact that in C, arguments to many
operators and function calls can be evaluated in any order.  For
example:

        r1 = f(5) + g(6);

The object code might call f(5) either before or after g(6); the
memory model cannot assume there is a fixed program order relation
between them.  (In fact, if the function calls are inlined then the
compiler might even interleave their object code.)


DEPENDENCY RELATIONS: data, addr, and ctrl
------------------------------------------

We say that two events are linked by a dependency relation when the
execution of the second event depends in some way on a value obtained
from memory by the first.  The first event must be a read, and the
value it obtains must somehow affect what the second event does.
There are three kinds of dependencies: data, address (addr), and
control (ctrl).

A read and a write event are linked by a data dependency if the value
obtained by the read affects the value stored by the write.  As a very
simple example:

        int x, y;

        r1 = READ_ONCE(x);
        WRITE_ONCE(y, r1 + 5);

The value stored by the WRITE_ONCE obviously depends on the value
loaded by the READ_ONCE.  Such dependencies can wind through
arbitrarily complicated computations, and a write can depend on the
values of multiple reads.

A read event and another memory access event are linked by an address
dependency if the value obtained by the read affects the location
accessed by the other event.  The second event can be either a read or
a write.  Here's another simple example:

        int a[20];
        int i;

        r1 = READ_ONCE(i);
        r2 = READ_ONCE(a[r1]);

Here the location accessed by the second READ_ONCE() depends on the
index value loaded by the first.  Pointer indirection also gives rise
to address dependencies, since the address of a location accessed
through a pointer will depend on the value read earlier from that
pointer.

Finally, a read event X and a write event Y are linked by a control
dependency if Y syntactically lies within an arm of an if statement
and X affects the evaluation of the if condition via a data or address
dependency (or similarly for a switch statement).  Simple example:

        int x, y;

        r1 = READ_ONCE(x);
        if (r1)
                WRITE_ONCE(y, 1984);

Execution of the WRITE_ONCE() is controlled by a conditional
expression which depends on the value obtained by the READ_ONCE();
hence there is a control dependency from the load to the store.

It should be pretty obvious that events can only depend on reads that
come earlier in program order.  Symbolically, if we have R ->data X,
R ->addr X, or R ->ctrl X (where R is a read event), then we must also
have R ->po X.  It wouldn't make sense for a computation to depend
somehow on a value that doesn't get loaded from shared memory until
later in the code!

Here's a trick question: When is a dependency not a dependency?
Answer: When it is purely syntactic rather than semantic.  We say a
dependency between two accesses is purely syntactic if the second
access doesn't actually depend on the result of the first.  Here is a
trivial example:

        r1 = READ_ONCE(x);
        WRITE_ONCE(y, r1 * 0);

There appears to be a data dependency from the load of x to the store
of y, since the value to be stored is computed from the value that was
loaded.  But in fact, the value stored does not really depend on
anything since it will always be 0.  Thus the data dependency is only
syntactic (it appears to exist in the code) but not semantic (the
second access will always be the same, regardless of the value of the
first access).  Given code like this, a compiler could simply discard
the value returned by the load from x, which would certainly destroy
any dependency.  (The compiler is not permitted to eliminate entirely
the load generated for a READ_ONCE() -- that's one of the nice
properties of READ_ONCE() -- but it is allowed to ignore the load's
value.)

It's natural to object that no one in their right mind would write
code like the above.  However, macro expansions can easily give rise
to this sort of thing, in ways that often are not apparent to the
programmer.

Another mechanism that can lead to purely syntactic dependencies is
related to the notion of "undefined behavior".  Certain program
behaviors are called "undefined" in the C language specification,
which means that when they occur there are no guarantees at all about
the outcome.  Consider the following example:

        int a[1];
        int i;

        r1 = READ_ONCE(i);
        r2 = READ_ONCE(a[r1]);

Access beyond the end or before the beginning of an array is one kind
of undefined behavior.  Therefore the compiler doesn't have to worry
about what will happen if r1 is nonzero, and it can assume that r1
will always be zero regardless of the value actually loaded from i.
(If the assumption turns out to be wrong the resulting behavior will
be undefined anyway, so the compiler doesn't care!)  Thus the value
from the load can be discarded, breaking the address dependency.

The LKMM is unaware that purely syntactic dependencies are different
from semantic dependencies and therefore mistakenly predicts that the
accesses in the two examples above will be ordered.  This is another
example of how the compiler can undermine the memory model.  Be
warned.


THE READS-FROM RELATION: rf, rfi, and rfe
-----------------------------------------

The reads-from relation (rf) links a write event to a read event when
the value loaded by the read is the value that was stored by the
write.  In colloquial terms, the load "reads from" the store.  We
write W ->rf R to indicate that the load R reads from the store W.  We
further distinguish the cases where the load and the store occur on
the same CPU (internal reads-from, or rfi) and where they occur on
different CPUs (external reads-from, or rfe).

For our purposes, a memory location's initial value is treated as
though it had been written there by an imaginary initial store that
executes on a separate CPU before the main program runs.

Usage of the rf relation implicitly assumes that loads will always
read from a single store.  It doesn't apply properly in the presence
of load-tearing, where a load obtains some of its bits from one store
and some of them from another store.
Fortunately, use of READ_ONCE() and WRITE_ONCE() will prevent
load-tearing; it's not possible to have:

        int x = 0;

        P0()
        {
                WRITE_ONCE(x, 0x1234);
        }

        P1()
        {
                int r1;

                r1 = READ_ONCE(x);
        }

and end up with r1 = 0x1200 (partly from x's initial value and partly
from the value stored by P0).

On the other hand, load-tearing is unavoidable when mixed-size
accesses are used.  Consider this example:

        union {
                u32 w;
                u16 h[2];
        } x;

        P0()
        {
                WRITE_ONCE(x.h[0], 0x1234);
                WRITE_ONCE(x.h[1], 0x5678);
        }

        P1()
        {
                int r1;

                r1 = READ_ONCE(x.w);
        }

If r1 = 0x56781234 (little-endian!) at the end, then P1 must have read
from both of P0's stores.  It is possible to handle mixed-size and
unaligned accesses in a memory model, but the LKMM currently does not
attempt to do so.  It requires all accesses to be properly aligned and
of the location's actual size.


CACHE COHERENCE AND THE COHERENCE ORDER RELATION: co, coi, and coe
------------------------------------------------------------------

Cache coherence is a general principle requiring that in a
multi-processor system, the CPUs must share a consistent view of the
memory contents.  Specifically, it requires that for each location in
shared memory, the stores to that location must form a single global
ordering which all the CPUs agree on (the coherence order), and this
ordering must be consistent with the program order for accesses to
that location.

To put it another way, for any variable x, the coherence order (co) of
the stores to x is simply the order in which the stores overwrite one
another.  The imaginary store which establishes x's initial value
comes first in the coherence order; the store which directly
overwrites the initial value comes second; the store which overwrites
that value comes third, and so on.

You can think of the coherence order as being the order in which the
stores reach x's location in memory (or if you prefer a more
hardware-centric view, the order in which the stores get written to
x's cache line).  We write W ->co W' if W comes before W' in the
coherence order, that is, if the value stored by W gets overwritten,
directly or indirectly, by the value stored by W'.

Coherence order is required to be consistent with program order.  This
requirement takes the form of four coherency rules:

        Write-write coherence: If W ->po-loc W' (i.e., W comes before
        W' in program order and they access the same location), where
        W and W' are two stores, then W ->co W'.

        Write-read coherence: If W ->po-loc R, where W is a store and
        R is a load, then R must read from W or from some other store
        which comes after W in the coherence order.

        Read-write coherence: If R ->po-loc W, where R is a load and W
        is a store, then the store which R reads from must come before
        W in the coherence order.

        Read-read coherence: If R ->po-loc R', where R and R' are two
        loads, then either they read from the same store or else the
        store read by R comes before the store read by R' in the
        coherence order.

This is sometimes referred to as sequential consistency per variable,
because it means that the accesses to any single memory location obey
the rules of the Sequential Consistency memory model.  (According to
Wikipedia, sequential consistency per variable and cache coherence
mean the same thing except that cache coherence includes an extra
requirement that every store eventually becomes visible to every CPU.)

Any reasonable memory model will include cache coherence.
Indeed, our expectation of cache coherence is so deeply ingrained that
violations of its requirements look more like hardware bugs than
programming errors:

        int x;

        P0()
        {
                WRITE_ONCE(x, 17);
                WRITE_ONCE(x, 23);
        }

If the final value stored in x after this code ran was 17, you would
think your computer was broken.  It would be a violation of the
write-write coherence rule: Since the store of 23 comes later in
program order, it must also come later in x's coherence order and
thus must overwrite the store of 17.

        int x = 0;

        P0()
        {
                int r1;

                r1 = READ_ONCE(x);
                WRITE_ONCE(x, 666);
        }

If r1 = 666 at the end, this would violate the read-write coherence
rule: The READ_ONCE() load comes before the WRITE_ONCE() store in
program order, so it must not read from that store but rather from one
coming earlier in the coherence order (in this case, x's initial
value).

        int x = 0;

        P0()
        {
                WRITE_ONCE(x, 5);
        }

        P1()
        {
                int r1, r2;

                r1 = READ_ONCE(x);
                r2 = READ_ONCE(x);
        }

If r1 = 5 (reading from P0's store) and r2 = 0 (reading from the
imaginary store which establishes x's initial value) at the end, this
would violate the read-read coherence rule: The r1 load comes before
the r2 load in program order, so it must not read from a store that
comes later in the coherence order.

(As a minor curiosity, if this code had used normal loads instead of
READ_ONCE() in P1, on Itanium it sometimes could end up with r1 = 5
and r2 = 0!  This results from parallel execution of the operations
encoded in Itanium's Very-Long-Instruction-Word format, and it is yet
another motivation for using READ_ONCE() when accessing shared memory
locations.)

Just like the po relation, co is inherently an ordering -- it is not
possible for a store to directly or indirectly overwrite itself!  And
just like with the rf relation, we distinguish between stores that
occur on the same CPU (internal coherence order, or coi) and stores
that occur on different CPUs (external coherence order, or coe).

On the other hand, stores to different memory locations are never
related by co, just as instructions on different CPUs are never
related by po.  Coherence order is strictly per-location, or if you
prefer, each location has its own independent coherence order.


THE FROM-READS RELATION: fr, fri, and fre
-----------------------------------------

The from-reads relation (fr) can be a little difficult for people to
grok.  It describes the situation where a load reads a value that gets
overwritten by a store.  In other words, we have R ->fr W when the
value that R reads is overwritten (directly or indirectly) by W, or
equivalently, when R reads from a store which comes earlier than W in
the coherence order.

For example:

        int x = 0;

        P0()
        {
                int r1;

                r1 = READ_ONCE(x);
                WRITE_ONCE(x, 2);
        }

The value loaded from x will be 0 (assuming cache coherence!), and it
gets overwritten by the value 2.  Thus there is an fr link from the
READ_ONCE() to the WRITE_ONCE().  If the code contained any later
stores to x, there would also be fr links from the READ_ONCE() to
them.

As with rf, rfi, and rfe, we subdivide the fr relation into fri (when
the load and the store are on the same CPU) and fre (when they are on
different CPUs).

Note that the fr relation is determined entirely by the rf and co
relations; it is not independent.  Given a read event R and a write
event W for the same location, we will have R ->fr W if and only if
the write which R reads from is co-before W.  In symbols,

        (R ->fr W) := (there exists W' with W' ->rf R and W' ->co W).
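
The example above showed an fri link.  For an external variant, here
is a minimal sketch along the same lines (not one of the document's
original examples), with the load and the store on different CPUs:

        int x = 0;

        P0()
        {
                int r1;

                r1 = READ_ONCE(x);
        }

        P1()
        {
                WRITE_ONCE(x, 2);
        }

If r1 = 0 then P0's load reads from the imaginary initial store, which
is co-before P1's store; hence the load is linked to P1's WRITE_ONCE()
by fre.
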
AN OPERATIONAL MODEL
--------------------

The LKMM is based on various operational memory models, meaning that
the models arise from an abstract view of how a computer system
operates.  Here are the main ideas, as incorporated into the LKMM.

The system as a whole is divided into the CPUs and a memory subsystem.
The CPUs are responsible for executing instructions (not necessarily
in program order), and they communicate with the memory subsystem.
For the most part, executing an instruction requires a CPU to perform
only internal operations.  However, loads, stores, and fences involve
more.

When CPU C executes a store instruction, it tells the memory subsystem
to store a certain value at a certain location.  The memory subsystem
propagates the store to all the other CPUs as well as to RAM.  (As a
special case, we say that the store propagates to its own CPU at the
time it is executed.)  The memory subsystem also determines where the
store falls in the location's coherence order.  In particular, it must
arrange for the store to be co-later than (i.e., to overwrite) any
other store to the same location which has already propagated to CPU
C.

When a CPU executes a load instruction R, it first checks to see
whether there are any as-yet unexecuted store instructions, for the
same location, that come before R in program order.  If there are, it
uses the value of the po-latest such store as the value obtained by R,
and we say that the store's value is forwarded to R.  Otherwise, the
CPU asks the memory subsystem for the value to load and we say that R
is satisfied from memory.  The memory subsystem hands back the value
of the co-latest store to the location in question which has already
propagated to that CPU.

(In fact, the picture needs to be a little more complicated than this.
CPUs have local caches, and propagating a store to a CPU really means
propagating it to the CPU's local cache.  A local cache can take some
time to process the stores that it receives, and a store can't be used
to satisfy one of the CPU's loads until it has been processed.  On
most architectures, the local caches process stores in
First-In-First-Out order, and consequently the processing delay
doesn't matter for the memory model.  But on Alpha, the local caches
have a partitioned design that results in non-FIFO behavior.  We will
discuss this in more detail later.)

Note that load instructions may be executed speculatively and may be
restarted under certain circumstances.  The memory model ignores these
premature executions; we simply say that the load executes at the
final time it is forwarded or satisfied.
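
As a minimal sketch of forwarding, in the same style as the earlier
examples (this one is for illustration, not one of the document's
litmus tests):

        int x = 0;

        P0()
        {
                int r1;

                WRITE_ONCE(x, 1);
                /* If the store is still unexecuted, its value 1 is
                 * forwarded to the load; either way r1 ends up 1.
                 */
                r1 = READ_ONCE(x);
        }
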
Executing a fence (or memory barrier) instruction doesn't require a
CPU to do anything special other than informing the memory subsystem
about the fence.  However, fences do constrain the way CPUs and the
memory subsystem handle other instructions, in two respects.

First, a fence forces the CPU to execute various instructions in
program order.  Exactly which instructions are ordered depends on the
type of fence:

        Strong fences, including smp_mb() and synchronize_rcu(), force
        the CPU to execute all po-earlier instructions before any
        po-later instructions;

        smp_rmb() forces the CPU to execute all po-earlier loads
        before any po-later loads;

        smp_wmb() forces the CPU to execute all po-earlier stores
        before any po-later stores;

        Acquire fences, such as smp_load_acquire(), force the CPU to
        execute the load associated with the fence (e.g., the load
        part of an smp_load_acquire()) before any po-later
        instructions;

        Release fences, such as smp_store_release(), force the CPU to
        execute all po-earlier instructions before the store
        associated with the fence (e.g., the store part of an
        smp_store_release()).

Second, some types of fence affect the way the memory subsystem
propagates stores.  When a fence instruction is executed on CPU C:

        For each other CPU C', smp_wmb() forces all po-earlier stores
        on C to propagate to C' before any po-later stores do.

        For each other CPU C', any store which propagates to C before
        a release fence is executed (including all po-earlier stores
        executed on C) is forced to propagate to C' before the store
        associated with the release fence does.

        Any store which propagates to C before a strong fence is
        executed (including all po-earlier stores on C) is forced to
        propagate to all other CPUs before any instructions po-after
        the strong fence are executed on C.

The propagation ordering enforced by release fences and strong fences
affects stores from other CPUs that propagate to CPU C before the
fence is executed, as well as stores that are executed on C before the
fence.  We describe this property by saying that release fences and
strong fences are A-cumulative.  By contrast, smp_wmb() fences are not
A-cumulative; they only affect the propagation of stores that are
executed on C before the fence (i.e., those which precede the fence in
program order).

rcu_read_lock(), rcu_read_unlock(), and synchronize_rcu() fences have
other properties which we discuss later.


PROPAGATION ORDER RELATION: cumul-fence
---------------------------------------

The fences which affect propagation order (i.e., strong, release, and
smp_wmb() fences) are collectively referred to as cumul-fences, even
though smp_wmb() isn't A-cumulative.  The cumul-fence relation is
defined to link memory access events E and F whenever:

        E and F are both stores on the same CPU and an smp_wmb() fence
        event occurs between them in program order; or

        F is a release fence and some X comes before F in program
        order, where either X = E or else E ->rf X; or

        A strong fence event occurs between some X and F in program
        order, where either X = E or else E ->rf X.

The operational model requires that whenever W and W' are both stores
and W ->cumul-fence W', then W must propagate to any given CPU before
W' does.  However, for different CPUs C and C', it does not require W
to propagate to C before W' propagates to C'.
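
Here is a minimal sketch of the second case in the list above (again
an added illustration, not one of the document's original examples):

        int x = 0, y = 0;

        P0()
        {
                WRITE_ONCE(x, 1);               /* E */
        }

        P1()
        {
                int r1;

                r1 = READ_ONCE(x);              /* X */
                smp_store_release(&y, 1);       /* F */
        }

If r1 = 1 then E ->rf X, and since X is po-before the release fence we
have E ->cumul-fence F.  The operational model then requires the x = 1
store to propagate to any given CPU before the store to y does.
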
DERIVATION OF THE LKMM FROM THE OPERATIONAL MODEL
-------------------------------------------------

The LKMM is derived from the restrictions imposed by the design
outlined above.  These restrictions involve the necessity of
maintaining cache coherence and the fact that a CPU can't operate on a
value before it knows what that value is, among other things.

The formal version of the LKMM is defined by six requirements, or
axioms:

        Sequential consistency per variable: This requires that the
        system obey the four coherency rules.

        Atomicity: This requires that atomic read-modify-write
        operations really are atomic, that is, no other stores can
        sneak into the middle of such an update.

        Happens-before: This requires that certain instructions are
        executed in a specific order.

        Propagation: This requires that certain stores propagate to
        CPUs and to RAM in a specific order.

        Rcu: This requires that RCU read-side critical sections and
        grace periods obey the rules of RCU, in particular, the
        Grace-Period Guarantee.

        Plain-coherence: This requires that plain memory accesses
        (those not using READ_ONCE(), WRITE_ONCE(), etc.) must obey
        the operational model's rules regarding cache coherence.

The first and second are quite common; they can be found in many
memory models (such as those for C11/C++11).  The "happens-before" and
"propagation" axioms have analogs in other memory models as well.  The
"rcu" and "plain-coherence" axioms are specific to the LKMM.

Each of these axioms is discussed below.


SEQUENTIAL CONSISTENCY PER VARIABLE
-----------------------------------

According to the principle of cache coherence, the stores to any fixed
shared location in memory form a global ordering.  We can imagine
inserting the loads from that location into this ordering, by placing
each load between the store that it reads from and the following
store.  This leaves the relative positions of loads that read from the
same store unspecified; let's say they are inserted in program order,
first for CPU 0, then CPU 1, etc.

You can check that the four coherency rules imply that the rf, co, fr,
and po-loc relations agree with this global ordering; in other words,
whenever we have X ->rf Y or X ->co Y or X ->fr Y or X ->po-loc Y, the
X event comes before the Y event in the global ordering.  The LKMM's
"coherence" axiom expresses this by requiring the union of these
relations not to have any cycles.  This means it must not be possible
to find events

        X0 -> X1 -> X2 -> ... -> Xn -> X0,

where each of the links is either rf, co, fr, or po-loc.  This has to
hold if the accesses to the fixed memory location can be ordered as
cache coherence demands.

Although it is not obvious, it can be shown that the converse is also
true: This LKMM axiom implies that the four coherency rules are
obeyed.


ATOMIC UPDATES: rmw
-------------------

What does it mean to say that a read-modify-write (rmw) update, such
as atomic_inc(&x), is atomic?  It means that the memory location (x in
this case) does not get altered between the read and the write events
making up the atomic operation.  In particular, if two CPUs perform
atomic_inc(&x) concurrently, it must be guaranteed that the final
value of x will be the initial value plus two.  We should never have
the following sequence of events:

        CPU 0 loads x obtaining 13;
                                        CPU 1 loads x obtaining 13;
        CPU 0 stores 14 to x;
                                        CPU 1 stores 14 to x;

where the final value of x is wrong (14 rather than 15).  In this
example, CPU 0's increment effectively gets lost because it occurs in
between CPU 1's load and store.  To put it another way, the problem is
that the position of CPU 0's store in x's coherence order is between
the store that CPU 1 reads from and the store that CPU 1 performs.

The same analysis applies to all atomic update operations.  Therefore,
to enforce atomicity the LKMM requires that atomic updates follow this
rule: Whenever R and W are the read and write events composing an
atomic read-modify-write and W' is the write event which R reads from,
there must not be any stores coming between W' and W in the coherence
order.
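
In litmus form, the forbidden scenario looks something like this (a
sketch added for illustration, using the kernel's atomic_t API):

        atomic_t x = ATOMIC_INIT(13);

        P0()
        {
                atomic_inc(&x);
        }

        P1()
        {
                atomic_inc(&x);
        }

The rule guarantees that the final value of x will be 15, never 14:
neither CPU's store may slip into the coherence order between the
store the other CPU's read event read from and that CPU's own store.
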
Equivalently,

        (R ->rmw W) implies (there is no X with R ->fr X and X ->co W),

where the rmw relation links the read and write events making up each
atomic update.  This is what the LKMM's "atomic" axiom says.


THE PRESERVED PROGRAM ORDER RELATION: ppo
-----------------------------------------

There are many situations where a CPU is obliged to execute two
instructions in program order.  We amalgamate them into the ppo (for
"preserved program order") relation, which links the po-earlier
instruction to the po-later instruction and is thus a sub-relation of
po.

The operational model already includes a description of one such
situation: Fences are a source of ppo links.  Suppose X and Y are
memory accesses with X ->po Y; then the CPU must execute X before Y if
any of the following hold:

        A strong (smp_mb() or synchronize_rcu()) fence occurs between
        X and Y;

        X and Y are both stores and an smp_wmb() fence occurs between
        them;

        X and Y are both loads and an smp_rmb() fence occurs between
        them;

        X is also an acquire fence, such as smp_load_acquire();

        Y is also a release fence, such as smp_store_release().

Another possibility, not mentioned earlier but discussed in the next
section, is:

        X and Y are both loads, X ->addr Y (i.e., there is an address
        dependency from X to Y), and X is a READ_ONCE() or an atomic
        access.

Dependencies can also cause instructions to be executed in program
order.  This is uncontroversial when the second instruction is a
store; either a data, address, or control dependency from a load R to
a store W will force the CPU to execute R before W.  This is very
simply because the CPU cannot tell the memory subsystem about W's
store before it knows what value should be stored (in the case of a
data dependency), what location it should be stored into (in the case
of an address dependency), or whether the store should actually take
place (in the case of a control dependency).

Dependencies to load instructions are more problematic.  To begin
with, there is no such thing as a data dependency to a load.  Next, a
CPU has no reason to respect a control dependency to a load, because
it can always satisfy the second load speculatively before the first,
and then ignore the result if it turns out that the second load
shouldn't be executed after all.  And lastly, the real difficulties
begin when we consider address dependencies to loads.

To be fair about it, all Linux-supported architectures do execute
loads in program order if there is an address dependency between them.
After all, a CPU cannot ask the memory subsystem to load a value from
a particular location before it knows what that location is.  However,
the split-cache design used by Alpha can cause it to behave in a way
that looks as if the loads were executed out of order (see the next
section for more details).  The kernel includes a workaround for this
problem when the loads come from READ_ONCE(), and therefore the LKMM
includes address dependencies to loads in the ppo relation.

On the other hand, dependencies can indirectly affect the ordering of
two loads.  This happens when there is a dependency from a load to a
store and a second, po-later load reads from that store:

        R ->dep W ->rfi R',

where the dep link can be either an address or a data dependency.  In
this situation we know it is possible for the CPU to execute R' before
W, because it can forward the value that W will store to R'.  But it
cannot execute R' before R, because it cannot forward the value before
it knows what that value is, or that W and R' do access the same
location.
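
For instance, with a data dependency (a minimal sketch, added here to
make the chain concrete):

        int x, y;

        P0()
        {
                int r1, r2;

                r1 = READ_ONCE(x);      /* R */
                WRITE_ONCE(y, r1);      /* W: R ->data W */
                r2 = READ_ONCE(y);      /* R': W ->rfi R' */
        }

The CPU may forward W's value to R' before W executes, but it cannot
do so before R executes, since the value being forwarded comes from R.
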
However, if there is merely a control dependency between R and W then
the CPU can speculatively forward W to R' before executing R; if the
speculation turns out to be wrong then the CPU merely has to restart
or abandon R'.

(In theory, a CPU might forward a store to a load when it runs across
an address dependency like this:

        r1 = READ_ONCE(ptr);
        WRITE_ONCE(*r1, 17);
        r2 = READ_ONCE(*r1);

because it could tell that the store and the second load access the
same location even before it knows what the location's address is.
However, none of the architectures supported by the Linux kernel do
this.)

Two memory accesses of the same location must always be executed in
program order if the second access is a store.  Thus, if we have

        R ->po-loc W

(the po-loc link says that R comes before W in program order and they
access the same location), the CPU is obliged to execute W after R.
If it executed W first then the memory subsystem would respond to R's
read request with the value stored by W (or an even later store), in
violation of the read-write coherence rule.  Similarly, if we had

        W ->po-loc W'

and the CPU executed W' before W, then the memory subsystem would put
W' before W in the coherence order.  It would effectively cause W to
overwrite W', in violation of the write-write coherence rule.
(Interestingly, an early ARMv8 memory model, now obsolete, proposed
allowing out-of-order writes like this to occur.  The model avoided
violating the write-write coherence rule by requiring the CPU not to
send the W write to the memory subsystem at all!)


AND THEN THERE WAS ALPHA
------------------------

As mentioned above, the Alpha architecture is unique in that it does
not appear to respect address dependencies to loads.  This means that
code such as the following:

        int x = 0;
        int y = -1;
        int *ptr = &y;

        P0()
        {
                WRITE_ONCE(x, 1);
                smp_wmb();
                WRITE_ONCE(ptr, &x);
        }

        P1()
        {
                int *r1;
                int r2;

                r1 = ptr;
                r2 = READ_ONCE(*r1);
        }

can malfunction on Alpha systems (notice that P1 uses an ordinary load
to read ptr instead of READ_ONCE()).  It is quite possible that
r1 = &x and r2 = 0 at the end, in spite of the address dependency.

At first glance this doesn't seem to make sense.  We know that the
smp_wmb() forces P0's store to x to propagate to P1 before the store
to ptr does.  And since P1 can't execute its second load until it
knows what location to load from, i.e., after executing its first
load, the value x = 1 must have propagated to P1 before the second
load executed.  So why doesn't r2 end up equal to 1?

The answer lies in the Alpha's split local caches.  Although the two
stores do reach P1's local cache in the proper order, it can happen
that the first store is processed by a busy part of the cache while
the second store is processed by an idle part.  As a result, the x = 1
value may not become available for P1's CPU to read until after the
ptr = &x value does, leading to the undesirable result above.  The
final effect is that even though the two loads really are executed in
program order, it appears that they aren't.

This could not have happened if the local cache had processed the
incoming stores in FIFO order.  By contrast, other architectures
maintain at least the appearance of FIFO order.

In practice, this difficulty is solved by inserting a special fence
between P1's two loads when the kernel is compiled for the Alpha
architecture.  In fact, as of version 4.15, the kernel automatically
adds this fence after every READ_ONCE() and atomic load on Alpha.
The effect of the fence is to cause the CPU not to execute any po-later instructions until after the local cache has finished processing all the stores it has already received. Thus, if the code was changed to:

        P1()
        {
                int *r1;
                int r2;

                r1 = READ_ONCE(ptr);
                r2 = READ_ONCE(*r1);
        }

then we would never get r1 = &x and r2 = 0. By the time P1 executed its second load, the x = 1 store would already be fully processed by the local cache and available for satisfying the read request. Thus we have yet another reason why shared data should always be read with READ_ONCE() or another synchronization primitive rather than accessed directly.

The LKMM requires that smp_rmb(), acquire fences, and strong fences share this property: They do not allow the CPU to execute any po-later instructions (or po-later loads in the case of smp_rmb()) until all outstanding stores have been processed by the local cache. In the case of a strong fence, the CPU first has to wait for all of its po-earlier stores to propagate to every other CPU in the system; then it has to wait for the local cache to process all the stores received as of that time -- not just the stores received when the strong fence began.

And of course, none of this matters for any architecture other than Alpha.

THE HAPPENS-BEFORE RELATION: hb
-------------------------------

The happens-before relation (hb) links memory accesses that have to execute in a certain order. hb includes the ppo relation and two others, one of which is rfe.

W ->rfe R implies that W and R are on different CPUs. It also means that W's store must have propagated to R's CPU before R executed; otherwise R could not have read the value stored by W. Therefore W must have executed before R, and so we have W ->hb R.

The equivalent fact need not hold if W ->rfi R (i.e., W and R are on the same CPU). As we have already seen, the operational model allows W's value to be forwarded to R in such cases, meaning that R may well execute before W does.

It's important to understand that neither coe nor fre is included in hb, despite their similarities to rfe. For example, suppose we have W ->coe W'. This means that W and W' are stores to the same location, they execute on different CPUs, and W comes before W' in the coherence order (i.e., W' overwrites W). Nevertheless, it is possible for W' to execute before W, because the decision as to which store overwrites the other is made later by the memory subsystem. When the stores are nearly simultaneous, either one can come out on top.

Similarly, R ->fre W means that W overwrites the value which R reads, but it doesn't mean that W has to execute after R. All that's necessary is for the memory subsystem not to propagate W to R's CPU until after R has executed, which is possible if W executes shortly before R.

The third relation included in hb is like ppo, in that it only links events that are on the same CPU. However it is more difficult to explain, because it arises only indirectly from the requirement of cache coherence. The relation is called prop, and it links two events on CPU C in situations where a store from some other CPU comes after the first event in the coherence order and propagates to C before the second event executes.

This is best explained with some examples. The simplest case looks like this:

        int x;

        P0()
        {
                int r1;

                WRITE_ONCE(x, 1);
                r1 = READ_ONCE(x);
        }

        P1()
        {
                WRITE_ONCE(x, 8);
        }

If r1 = 8 at the end then P0's accesses must have executed in program order.
We can deduce this from the operational model; if P0's load had executed before its store then the value of the store would have been forwarded to the load, so r1 would have ended up equal to 1, not 8. In this case there is a prop link from P0's write event to its read event, because P1's store came after P0's store in x's coherence order, and P1's store propagated to P0 before P0's load executed.

An equally simple case involves two loads of the same location that read from different stores:

        int x = 0;

        P0()
        {
                int r1, r2;

                r1 = READ_ONCE(x);
                r2 = READ_ONCE(x);
        }

        P1()
        {
                WRITE_ONCE(x, 9);
        }

If r1 = 0 and r2 = 9 at the end then P0's accesses must have executed in program order. If the second load had executed before the first then the x = 9 store must have been propagated to P0 before the first load executed, and so r1 would have been 9 rather than 0. In this case there is a prop link from P0's first read event to its second, because P1's store overwrote the value read by P0's first load, and P1's store propagated to P0 before P0's second load executed.

Less trivial examples of prop all involve fences. Unlike the simple examples above, they can require that some instructions are executed out of program order. This next one should look familiar:

        int buf = 0, flag = 0;

        P0()
        {
                WRITE_ONCE(buf, 1);
                smp_wmb();
                WRITE_ONCE(flag, 1);
        }

        P1()
        {
                int r1;
                int r2;

                r1 = READ_ONCE(flag);
                r2 = READ_ONCE(buf);
        }

This is the MP pattern again, with an smp_wmb() fence between the two stores. If r1 = 1 and r2 = 0 at the end then there is a prop link from P1's second load to its first (backwards!). The reason is similar to the previous examples: The value P1 loads from buf gets overwritten by P0's store to buf, the fence guarantees that the store to buf will propagate to P1 before the store to flag does, and the store to flag propagates to P1 before P1 reads flag.

The prop link says that in order to obtain the r1 = 1, r2 = 0 result, P1 must execute its second load before the first. Indeed, if the load from flag were executed first, then the buf = 1 store would already have propagated to P1 by the time P1's load from buf executed, so r2 would have been 1 at the end, not 0. (The reasoning holds even for Alpha, although the details are more complicated and we will not go into them.)

But what if we put an smp_rmb() fence between P1's loads? The fence would force the two loads to be executed in program order, and it would generate a cycle in the hb relation: The fence would create a ppo link (hence an hb link) from the first load to the second, and the prop relation would give an hb link from the second load to the first. Since an instruction can't execute before itself, we are forced to conclude that if an smp_rmb() fence is added, the r1 = 1, r2 = 0 outcome is impossible -- as it should be.

The formal definition of the prop relation involves a coe or fre link, followed by an arbitrary number of cumul-fence links, ending with an rfe link. You can concoct more exotic examples, containing more than one fence, although this quickly leads to diminishing returns in terms of complexity. For instance, here's an example containing a coe link followed by two cumul-fences and an rfe link, utilizing the fact that release fences are A-cumulative:

        int x, y, z;

        P0()
        {
                int r0;

                WRITE_ONCE(x, 1);
                r0 = READ_ONCE(z);
        }

        P1()
        {
                WRITE_ONCE(x, 2);
                smp_wmb();
                WRITE_ONCE(y, 1);
        }

        P2()
        {
                int r2;

                r2 = READ_ONCE(y);
                smp_store_release(&z, 1);
        }

If x = 2, r0 = 1, and r2 = 1 after this code runs then there is a prop link from P0's store to its load.
This is because P0's store gets overwritten by P1's store since x = 2 at the end (a coe link), the smp_wmb() ensures that P1's store to x propagates to P2 before the store to y does (the first cumul-fence), the store to y propagates to P2 before P2's load and store execute, P2's smp_store_release() guarantees that the stores to x and y both propagate to P0 before the store to z does (the second cumul-fence), and P0's load executes after the store to z has propagated to P0 (an rfe link).

In summary, the fact that the hb relation links memory access events in the order they execute means that it must not have cycles. This requirement is the content of the LKMM's "happens-before" axiom.

The LKMM defines yet another relation connected to times of instruction execution, but it is not included in hb. It relies on the particular properties of strong fences, which we cover in the next section.

THE PROPAGATES-BEFORE RELATION: pb
----------------------------------

The propagates-before (pb) relation capitalizes on the special features of strong fences. It links two events E and F whenever some store is coherence-later than E and propagates to every CPU and to RAM before F executes. The formal definition requires that E be linked to F via a coe or fre link, an arbitrary number of cumul-fences, an optional rfe link, a strong fence, and an arbitrary number of hb links. Let's see how this definition works out.

Consider first the case where E is a store (implying that the sequence of links begins with coe). Then there are events W, X, Y, and Z such that:

        E ->coe W ->cumul-fence* X ->rfe? Y ->strong-fence Z ->hb* F,

where the * suffix indicates an arbitrary number of links of the specified type, and the ? suffix indicates the link is optional (Y may be equal to X). Because of the cumul-fence links, we know that W will propagate to Y's CPU before X does, hence before Y executes and hence before the strong fence executes. Because this fence is strong, we know that W will propagate to every CPU and to RAM before Z executes. And because of the hb links, we know that Z will execute before F. Thus W, which comes later than E in the coherence order, will propagate to every CPU and to RAM before F executes.

The case where E is a load is exactly the same, except that the first link in the sequence is fre instead of coe.

The existence of a pb link from E to F implies that E must execute before F. To see why, suppose that F executed first. Then W would have propagated to E's CPU before E executed. If E was a store, the memory subsystem would then be forced to make E come after W in the coherence order, contradicting the fact that E ->coe W. If E was a load, the memory subsystem would then be forced to satisfy E's read request with the value stored by W or an even later store, contradicting the fact that E ->fre W.

A good example illustrating how pb works is the SB pattern with strong fences:

        int x = 0, y = 0;

        P0()
        {
                int r0;

                WRITE_ONCE(x, 1);
                smp_mb();
                r0 = READ_ONCE(y);
        }

        P1()
        {
                int r1;

                WRITE_ONCE(y, 1);
                smp_mb();
                r1 = READ_ONCE(x);
        }

If r0 = 0 at the end then there is a pb link from P0's load to P1's load: an fre link from P0's load to P1's store (which overwrites the value read by P0), and a strong fence between P1's store and its load. In this example, the sequences of cumul-fence and hb links are empty. Note that this pb link is not included in hb as an instance of prop, because it does not start and end on the same CPU.

Similarly, if r1 = 0 at the end then there is a pb link from P1's load to P0's.
This means that if both r0 and r1 were 0 there would be a cycle in pb, which is not possible since an instruction cannot execute before itself. Thus, adding smp_mb() fences to the SB pattern prevents the r0 = 0, r1 = 0 outcome.

In summary, the fact that the pb relation links events in the order they execute means that it cannot have cycles. This requirement is the content of the LKMM's "propagation" axiom.

RCU RELATIONS: rcu-link, rcu-gp, rcu-rscsi, rcu-order, rcu-fence, and rb
------------------------------------------------------------------------

RCU (Read-Copy-Update) is a powerful synchronization mechanism. It rests on two concepts: grace periods and read-side critical sections.

A grace period is the span of time occupied by a call to synchronize_rcu(). A read-side critical section (or just critical section, for short) is a region of code delimited by rcu_read_lock() at the start and rcu_read_unlock() at the end. Critical sections can be nested, although we won't make use of this fact.

As far as memory models are concerned, RCU's main feature is its Grace-Period Guarantee, which states that a critical section can never span a full grace period. In more detail, the Guarantee says:

        For any critical section C and any grace period G, at least one of the following statements must hold:

(1)     C ends before G does, and in addition, every store that propagates to C's CPU before the end of C must propagate to every CPU before G ends.

(2)     G starts before C does, and in addition, every store that propagates to G's CPU before the start of G must propagate to every CPU before C starts.

In particular, it is not possible for a critical section to both start before and end after a grace period.

Here is a simple example of RCU in action:

        int x, y;

        P0()
        {
                rcu_read_lock();
                WRITE_ONCE(x, 1);
                WRITE_ONCE(y, 1);
                rcu_read_unlock();
        }

        P1()
        {
                int r1, r2;

                r1 = READ_ONCE(x);
                synchronize_rcu();
                r2 = READ_ONCE(y);
        }

The Grace Period Guarantee tells us that when this code runs, it will never end with r1 = 1 and r2 = 0. The reasoning is as follows. r1 = 1 means that P0's store to x propagated to P1 before P1 called synchronize_rcu(), so P0's critical section must have started before P1's grace period, contrary to part (2) of the Guarantee. On the other hand, r2 = 0 means that P0's store to y, which occurs before the end of the critical section, did not propagate to P1 before the end of the grace period, contrary to part (1). Together the results violate the Guarantee.

In the kernel's implementations of RCU, the requirements for stores to propagate to every CPU are fulfilled by placing strong fences at suitable places in the RCU-related code. Thus, if a critical section starts before a grace period does then the critical section's CPU will execute an smp_mb() fence after the end of the critical section and some time before the grace period's synchronize_rcu() call returns. And if a critical section ends after a grace period does then the synchronize_rcu() routine will execute an smp_mb() fence at its start and some time before the critical section's opening rcu_read_lock() executes.

What exactly do we mean by saying that a critical section "starts before" or "ends after" a grace period? Some aspects of the meaning are pretty obvious, as in the example above, but the details aren't entirely clear. The LKMM formalizes this notion by means of the rcu-link relation.
rcu-link encompasses a very general notion of "before": If E and F are RCU fence events (i.e., rcu_read_lock(), rcu_read_unlock(), or synchronize_rcu()) then among other things, E ->rcu-link F includes cases where E is po-before some memory-access event X, F is po-after some memory-access event Y, and we have any of X ->rfe Y, X ->co Y, or X ->fr Y.

The formal definition of the rcu-link relation is more than a little obscure, and we won't give it here. It is closely related to the pb relation, and the details don't matter unless you want to comb through a somewhat lengthy formal proof. Pretty much all you need to know about rcu-link is the information in the preceding paragraph.

The LKMM also defines the rcu-gp and rcu-rscsi relations. They bring grace periods and read-side critical sections into the picture, in the following way:

        E ->rcu-gp F means that E and F are in fact the same event, and that event is a synchronize_rcu() fence (i.e., a grace period).

        E ->rcu-rscsi F means that E and F are the rcu_read_unlock() and rcu_read_lock() fence events delimiting some read-side critical section. (The "i" at the end of the name emphasizes that this relation is "inverted": It links the end of the critical section to the start.)

If we think of the rcu-link relation as standing for an extended "before", then X ->rcu-gp Y ->rcu-link Z roughly says that X is a grace period which ends before Z begins. (In fact it covers more than this, because it also includes cases where some store propagates to Z's CPU before Z begins but doesn't propagate to some other CPU until after X ends.) Similarly, X ->rcu-rscsi Y ->rcu-link Z says that X is the end of a critical section which starts before Z begins.

The LKMM goes on to define the rcu-order relation as a sequence of rcu-gp and rcu-rscsi links separated by rcu-link links, in which the number of rcu-gp links is >= the number of rcu-rscsi links. For example:

        X ->rcu-gp Y ->rcu-link Z ->rcu-rscsi T ->rcu-link U ->rcu-gp V

would imply that X ->rcu-order V, because this sequence contains two rcu-gp links and one rcu-rscsi link. (It also implies that X ->rcu-order T and Z ->rcu-order V.) On the other hand:

        X ->rcu-rscsi Y ->rcu-link Z ->rcu-rscsi T ->rcu-link U ->rcu-gp V

does not imply X ->rcu-order V, because the sequence contains only one rcu-gp link but two rcu-rscsi links.

The rcu-order relation is important because the Grace Period Guarantee means that rcu-order links act kind of like strong fences. In particular, E ->rcu-order F implies not only that E begins before F ends, but also that any write po-before E will propagate to every CPU before any instruction po-after F can execute. (However, it does not imply that E must execute before F; in fact, each synchronize_rcu() fence event is linked to itself by rcu-order as a degenerate case.)

To prove this in full generality requires some intellectual effort. We'll consider just a very simple case:

        G ->rcu-gp W ->rcu-link Z ->rcu-rscsi F.

This formula means that G and W are the same event (a grace period), and there are events X, Y and a read-side critical section C such that:

        1. G = W is po-before or equal to X;

        2. X comes "before" Y in some sense (including rfe, co and fr);

        3. Y is po-before Z;

        4. Z is the rcu_read_unlock() event marking the end of C;

        5. F is the rcu_read_lock() event marking the start of C.

From 1 - 4 we deduce that the grace period G ends before the critical section C.
Then part (2) of the Grace Period Guarantee says not only that G starts before C does, but also that any write which executes on G's CPU before G starts must propagate to every CPU before C starts. In particular, the write propagates to every CPU before F finishes executing and hence before any instruction po-after F can execute. This sort of reasoning can be extended to handle all the situations covered by rcu-order.

The rcu-fence relation is a simple extension of rcu-order. While rcu-order only links certain fence events (calls to synchronize_rcu(), rcu_read_lock(), or rcu_read_unlock()), rcu-fence links any events that are separated by an rcu-order link. This is analogous to the way the strong-fence relation links events that are separated by an smp_mb() fence event (as mentioned above, rcu-order links act kind of like strong fences). Written symbolically, X ->rcu-fence Y means there are fence events E and F such that:

        X ->po E ->rcu-order F ->po Y.

From the discussion above, we see this implies not only that X executes before Y, but also (if X is a store) that X propagates to every CPU before Y executes. Thus rcu-fence is sort of a "super-strong" fence: Unlike the original strong fences (smp_mb() and synchronize_rcu()), rcu-fence is able to link events on different CPUs. (Perhaps this fact should lead us to say that rcu-fence isn't really a fence at all!)

Finally, the LKMM defines the RCU-before (rb) relation in terms of rcu-fence. This is done in essentially the same way as the pb relation was defined in terms of strong-fence. We will omit the details; the end result is that E ->rb F implies E must execute before F, just as E ->pb F does (and for much the same reasons).

Putting this all together, the LKMM expresses the Grace Period Guarantee by requiring that the rb relation does not contain a cycle. Equivalently, this "rcu" axiom requires that there are no events E and F with E ->rcu-link F ->rcu-order E. Or to put it a third way, the axiom requires that there are no cycles consisting of rcu-gp and rcu-rscsi alternating with rcu-link, where the number of rcu-gp links is >= the number of rcu-rscsi links.

Justifying the axiom isn't easy, but it is in fact a valid formalization of the Grace Period Guarantee. We won't attempt to go through the detailed argument, but the following analysis gives a taste of what is involved. Suppose both parts of the Guarantee are violated: A critical section starts before a grace period, and some store propagates to the critical section's CPU before the end of the critical section but doesn't propagate to some other CPU until after the end of the grace period.

Putting symbols to these ideas, let L and U be the rcu_read_lock() and rcu_read_unlock() fence events delimiting the critical section in question, and let S be the synchronize_rcu() fence event for the grace period. Saying that the critical section starts before S means there are events Q and R where Q is po-after L (which marks the start of the critical section), Q is "before" R in the sense used by the rcu-link relation, and R is po-before the grace period S. Thus we have:

        L ->rcu-link S.

Let W be the store mentioned above, let Y come before the end of the critical section and witness that W propagates to the critical section's CPU by reading from W, and let Z on some arbitrary CPU be a witness that W has not propagated to that CPU, where Z happens after some event X which is po-after S. Symbolically, this amounts to:

        S ->po X ->hb* Z ->fr W ->rf Y ->po U.
The fr link from Z to W indicates that W has not propagated to Z's CPU at the time that Z executes. From this, it can be shown (see the discussion of the rcu-link relation earlier) that S and U are related by rcu-link:

        S ->rcu-link U.

Since S is a grace period we have S ->rcu-gp S, and since L and U are the start and end of the critical section C we have U ->rcu-rscsi L. From this we obtain:

        S ->rcu-gp S ->rcu-link U ->rcu-rscsi L ->rcu-link S,

a forbidden cycle. Thus the "rcu" axiom rules out this violation of the Grace Period Guarantee.

For something a little more down-to-earth, let's see how the axiom works out in practice. Consider the RCU code example from above, this time with statement labels added:

        int x, y;

        P0()
        {
                L: rcu_read_lock();
                X: WRITE_ONCE(x, 1);
                Y: WRITE_ONCE(y, 1);
                U: rcu_read_unlock();
        }

        P1()
        {
                int r1, r2;

                Z: r1 = READ_ONCE(x);
                S: synchronize_rcu();
                W: r2 = READ_ONCE(y);
        }

If r2 = 0 at the end then P0's store at Y overwrites the value that P1's load at W reads from, so we have W ->fre Y. Since S ->po W and also Y ->po U, we get S ->rcu-link U. In addition, S ->rcu-gp S because S is a grace period.

If r1 = 1 at the end then P1's load at Z reads from P0's store at X, so we have X ->rfe Z. Together with L ->po X and Z ->po S, this yields L ->rcu-link S. And since L and U are the start and end of a critical section, we have U ->rcu-rscsi L.

Then U ->rcu-rscsi L ->rcu-link S ->rcu-gp S ->rcu-link U is a forbidden cycle, violating the "rcu" axiom. Hence the outcome is not allowed by the LKMM, as we would expect.

For contrast, let's see what can happen in a more complicated example:

        int x, y, z;

        P0()
        {
                int r0;

                L0: rcu_read_lock();
                r0 = READ_ONCE(x);
                WRITE_ONCE(y, 1);
                U0: rcu_read_unlock();
        }

        P1()
        {
                int r1;

                r1 = READ_ONCE(y);
                S1: synchronize_rcu();
                WRITE_ONCE(z, 1);
        }

        P2()
        {
                int r2;

                L2: rcu_read_lock();
                r2 = READ_ONCE(z);
                WRITE_ONCE(x, 1);
                U2: rcu_read_unlock();
        }

If r0 = r1 = r2 = 1 at the end, then similar reasoning to before shows that U0 ->rcu-rscsi L0 ->rcu-link S1 ->rcu-gp S1 ->rcu-link U2 ->rcu-rscsi L2 ->rcu-link U0. However this cycle is not forbidden, because the sequence of relations contains fewer instances of rcu-gp (one) than of rcu-rscsi (two). Consequently the outcome is allowed by the LKMM. The following instruction timing diagram shows how it might actually occur:

        P0                      P1                      P2
        --------------------    --------------------    --------------------
        rcu_read_lock()
        WRITE_ONCE(y, 1)
                                r1 = READ_ONCE(y)
                                synchronize_rcu() starts
        .                                               rcu_read_lock()
        .                                               WRITE_ONCE(x, 1)
        r0 = READ_ONCE(x)       .
        rcu_read_unlock()       .
                                synchronize_rcu() ends
                                WRITE_ONCE(z, 1)
                                                        r2 = READ_ONCE(z)
                                                        rcu_read_unlock()

This requires P0 and P2 to execute their loads and stores out of program order, but of course they are allowed to do so. And as you can see, the Grace Period Guarantee is not violated: The critical section in P0 both starts before P1's grace period does and ends before it does, and the critical section in P2 both starts after P1's grace period does and ends after it does.

Addendum: The LKMM now supports SRCU (Sleepable Read-Copy-Update) in addition to normal RCU. The ideas involved are much the same as above, with new relations srcu-gp and srcu-rscsi added to represent SRCU grace periods and read-side critical sections. There is a restriction on the srcu-gp and srcu-rscsi links that can appear in an rcu-order sequence (the srcu-rscsi links must be paired with srcu-gp links having the same SRCU domain with proper nesting); the details are relatively unimportant.
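To make the pairing requirement concrete, here is a minimal sketch (invented for illustration; the domain name ss and the elided accesses are assumptions) of an SRCU reader and updater using the same SRCU domain:

        struct srcu_struct ss;          /* the SRCU domain */

        P0()
        {
                int idx;

                idx = srcu_read_lock(&ss);      /* start of SRCU critical section */
                /* ... read-side accesses ... */
                srcu_read_unlock(&ss, idx);     /* end; the index must be passed back */
        }

        P1()
        {
                /* ... updates ... */
                synchronize_srcu(&ss);          /* SRCU grace period for domain ss */
        }

The srcu-rscsi link from P0's srcu_read_unlock() to its srcu_read_lock() can pair with the srcu-gp link for P1's synchronize_srcu() only because both operate on the same domain ss.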
LOCKING
-------

The LKMM includes locking. In fact, there is special code for locking in the formal model, added in order to make tools run faster. However, this special code is intended to be more or less equivalent to concepts we have already covered. A spinlock_t variable is treated the same as an int, and spin_lock(&s) is treated almost the same as:

        while (cmpxchg_acquire(&s, 0, 1) != 0)
                cpu_relax();

This waits until s is equal to 0 and then atomically sets it to 1, and the read part of the cmpxchg operation acts as an acquire fence. An alternate way to express the same thing would be:

        r = xchg_acquire(&s, 1);

along with a requirement that at the end, r = 0. Similarly, spin_trylock(&s) is treated almost the same as:

        return !cmpxchg_acquire(&s, 0, 1);

which atomically sets s to 1 if it is currently equal to 0 and returns true if it succeeds (the read part of the cmpxchg operation acts as an acquire fence only if the operation is successful). spin_unlock(&s) is treated almost the same as:

        smp_store_release(&s, 0);

The "almost" qualifiers above need some explanation. In the LKMM, the store-release in a spin_unlock() and the load-acquire which forms the first half of the atomic rmw update in a spin_lock() or a successful spin_trylock() -- we can call these things lock-releases and lock-acquires -- have two properties beyond those of ordinary releases and acquires.

First, when a lock-acquire reads from or is po-after a lock-release, the LKMM requires that every instruction po-before the lock-release must execute before any instruction po-after the lock-acquire. This would naturally hold if the release and acquire operations were on different CPUs and accessed the same lock variable, but the LKMM says it also holds when they are on the same CPU, even if they access different lock variables. For example:

        int x, y;
        spinlock_t s, t;

        P0()
        {
                int r1, r2;

                spin_lock(&s);
                r1 = READ_ONCE(x);
                spin_unlock(&s);
                spin_lock(&t);
                r2 = READ_ONCE(y);
                spin_unlock(&t);
        }

        P1()
        {
                WRITE_ONCE(y, 1);
                smp_wmb();
                WRITE_ONCE(x, 1);
        }

Here the second spin_lock() is po-after the first spin_unlock(), and therefore the load of x must execute before the load of y, even though the two locking operations use different locks. Thus we cannot have r1 = 1 and r2 = 0 at the end (this is an instance of the MP pattern).

This requirement does not apply to ordinary release and acquire fences, only to lock-related operations. For instance, suppose P0() in the example had been written as:

        P0()
        {
                int r1, r2, r3;

                r1 = READ_ONCE(x);
                smp_store_release(&s, 1);
                r3 = smp_load_acquire(&s);
                r2 = READ_ONCE(y);
        }

Then the CPU would be allowed to forward the s = 1 value from the smp_store_release() to the smp_load_acquire(), executing the instructions in the following order:

        r3 = smp_load_acquire(&s);      // Obtains r3 = 1
        r2 = READ_ONCE(y);
        r1 = READ_ONCE(x);
        smp_store_release(&s, 1);       // Value is forwarded

and thus it could load y before x, obtaining r2 = 0 and r1 = 1.

Second, when a lock-acquire reads from or is po-after a lock-release, and some other stores W and W' occur po-before the lock-release and po-after the lock-acquire respectively, the LKMM requires that W must propagate to each CPU before W' does. For example, consider:

        int x, y;
        spinlock_t s;

        P0()
        {
                spin_lock(&s);
                WRITE_ONCE(x, 1);
                spin_unlock(&s);
        }

        P1()
        {
                int r1;

                spin_lock(&s);
                r1 = READ_ONCE(x);
                WRITE_ONCE(y, 1);
                spin_unlock(&s);
        }

        P2()
        {
                int r2, r3;

                r2 = READ_ONCE(y);
                smp_rmb();
                r3 = READ_ONCE(x);
        }

If r1 = 1 at the end then the spin_lock() in P1 must have read from the spin_unlock() in P0.
Hence the store to x must propagate to P2 before the store to y does, so we cannot have r2 = 1 and r3 = 0. But if P1 had used a lock variable different from s, the writes could have propagated in either order. (On the other hand, if the code in P0 and P1 had all executed on a single CPU, as in the example before this one, then the writes would have propagated in order even if the two critical sections used different lock variables.)

These two special requirements for lock-release and lock-acquire do not arise from the operational model. Nevertheless, kernel developers have come to expect and rely on them because they do hold on all architectures supported by the Linux kernel, albeit for various differing reasons.

PLAIN ACCESSES AND DATA RACES
-----------------------------

In the LKMM, memory accesses such as READ_ONCE(x), atomic_inc(&y), smp_load_acquire(&z), and so on are collectively referred to as "marked" accesses, because they are all annotated with special operations of one kind or another. Ordinary C-language memory accesses such as x or y = 0 are simply called "plain" accesses.

Early versions of the LKMM had nothing to say about plain accesses. The C standard allows compilers to assume that the variables affected by plain accesses are not concurrently read or written by any other threads or CPUs. This leaves compilers free to implement all manner of transformations or optimizations of code containing plain accesses, making such code very difficult for a memory model to handle.

Here is just one example of a possible pitfall:

        int a = 6;
        int *x = &a;

        P0()
        {
                int *r1;
                int r2 = 0;

                r1 = x;
                if (r1 != NULL)
                        r2 = READ_ONCE(*r1);
        }

        P1()
        {
                WRITE_ONCE(x, NULL);
        }

On the face of it, one would expect that when this code runs, the only possible final values for r2 are 6 and 0, depending on whether or not P1's store to x propagates to P0 before P0's load from x executes. But since P0's load from x is a plain access, the compiler may decide to carry out the load twice (for the comparison against NULL, then again for the READ_ONCE()) and eliminate the temporary variable r1. The object code generated for P0 could therefore end up looking rather like this:

        P0()
        {
                int r2 = 0;

                if (x != NULL)
                        r2 = READ_ONCE(*x);
        }

And now it is obvious that this code runs the risk of dereferencing a NULL pointer, because P1's store to x might propagate to P0 after the test against NULL has been made but before the READ_ONCE() executes. If the original code had said "r1 = READ_ONCE(x)" instead of "r1 = x", the compiler would not have performed this optimization and there would be no possibility of a NULL-pointer dereference.

Given the possibility of transformations like this one, the LKMM doesn't try to predict all possible outcomes of code containing plain accesses. It is instead content to determine whether the code violates the compiler's assumptions, which would render the ultimate outcome undefined.

In technical terms, the compiler is allowed to assume that when the program executes, there will not be any data races. A "data race" occurs when there are two memory accesses such that:

1.      they access the same location,

2.      at least one of them is a store,

3.      at least one of them is plain,

4.      they occur on different CPUs (or in different threads on the same CPU), and

5.      they execute concurrently.

In the literature, two accesses are said to "conflict" if they satisfy 1 and 2 above. We'll go a little farther and say that two accesses are "race candidates" if they satisfy 1 - 4.
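For concreteness, here is a minimal sketch of a pair of race candidates (invented for illustration; it is not one of the LKMM's standard litmus tests):

        int x;

        P0()
        {
                x = 1;                  /* plain store */
        }

        P1()
        {
                int r1;

                r1 = READ_ONCE(x);      /* marked load */
        }

The two accesses satisfy conditions 1 - 4: they access the same location, one of them is a store, one of them is plain, and they are on different CPUs. Since nothing in the program orders them, they can execute concurrently, so the LKMM would report a data race; marking the store with WRITE_ONCE() would eliminate it.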
Thus, whether or not two race candidates actually do race in a given execution depends on whether they are concurrent. The LKMM tries to determine whether a program contains race candidates which may execute concurrently; if it does then the LKMM says there is a potential data race and makes no predictions about the program's outcome.

Determining whether two accesses are race candidates is easy; you can see that all the concepts involved in the definition above are already part of the memory model. The hard part is telling whether they may execute concurrently. The LKMM takes a conservative attitude, assuming that accesses may be concurrent unless it can prove they are not.

If two memory accesses aren't concurrent then one must execute before the other. Therefore the LKMM decides two accesses aren't concurrent if they can be connected by a sequence of hb, pb, and rb links (together referred to as xb, for "executes before"). However, there are two complicating factors.

If X is a load and X executes before a store Y, then indeed there is no danger of X and Y being concurrent. After all, Y can't have any effect on the value obtained by X until the memory subsystem has propagated Y from its own CPU to X's CPU, which won't happen until some time after Y executes and thus after X executes. But if X is a store, then even if X executes before Y it is still possible that X will propagate to Y's CPU just as Y is executing. In such a case X could very well interfere somehow with Y, and we would have to consider X and Y to be concurrent.

Therefore when X is a store, for X and Y to be non-concurrent the LKMM requires not only that X must execute before Y but also that X must propagate to Y's CPU before Y executes. (Or vice versa, of course, if Y executes before X -- then Y must propagate to X's CPU before X executes if Y is a store.) This is expressed by the visibility relation (vis), where X ->vis Y is defined to hold if there is an intermediate event Z such that:

        X is connected to Z by a possibly empty sequence of cumul-fence links followed by an optional rfe link (if none of these links are present, X and Z are the same event),

and either:

        Z is connected to Y by a strong-fence link followed by a possibly empty sequence of xb links,

or:

        Z is on the same CPU as Y and is connected to Y by a possibly empty sequence of xb links (again, if the sequence is empty it means Z and Y are the same event).

The motivations behind this definition are straightforward:

        cumul-fence memory barriers force stores that are po-before the barrier to propagate to other CPUs before stores that are po-after the barrier.

        An rfe link from an event W to an event R says that R reads from W, which certainly means that W must have propagated to R's CPU before R executed.

        strong-fence memory barriers force stores that are po-before the barrier, or that propagate to the barrier's CPU before the barrier executes, to propagate to all CPUs before any events po-after the barrier can execute.

To see how this works out in practice, consider our old friend, the MP pattern (with fences and statement labels, but without the conditional test):

        int buf = 0, flag = 0;

        P0()
        {
                X: WRITE_ONCE(buf, 1);
                smp_wmb();
                W: WRITE_ONCE(flag, 1);
        }

        P1()
        {
                int r1;
                int r2 = 0;

                Z: r1 = READ_ONCE(flag);
                smp_rmb();
                Y: r2 = READ_ONCE(buf);
        }

The smp_wmb() memory barrier gives a cumul-fence link from X to W, and assuming r1 = 1 at the end, there is an rfe link from W to Z. This means that the store to buf must propagate from P0 to P1 before Z executes.
Next, Z and Y are on the same CPU and the smp_rmb() fence provides an xb link from Z to Y (i.e., it forces Z to execute before Y). Therefore we have X ->vis Y: X must propagate to Y's CPU before Y executes.

The second complicating factor mentioned above arises from the fact that when we are considering data races, some of the memory accesses are plain. Now, although we have not said so explicitly, up to this point most of the relations defined by the LKMM (ppo, hb, prop, cumul-fence, pb, and so on -- including vis) apply only to marked accesses.

There are good reasons for this restriction. The compiler is not allowed to apply fancy transformations to marked accesses, and consequently each such access in the source code corresponds more or less directly to a single machine instruction in the object code. But plain accesses are a different story; the compiler may combine them, split them up, duplicate them, eliminate them, invent new ones, and who knows what else. Seeing a plain access in the source code tells you almost nothing about what machine instructions will end up in the object code.

Fortunately, the compiler isn't completely free; it is subject to some limitations. For one, it is not allowed to introduce a data race into the object code if the source code does not already contain a data race (if it could, memory models would be useless and no multithreaded code would be safe!). For another, it cannot move a plain access past a compiler barrier.

A compiler barrier is a kind of fence, but as the name implies, it only affects the compiler; it does not necessarily have any effect on how instructions are executed by the CPU. In Linux kernel source code, the barrier() function is a compiler barrier. It doesn't give rise directly to any machine instructions in the object code; rather, it affects how the compiler generates the rest of the object code. Given source code like this:

        ... some memory accesses ...
        barrier();
        ... some other memory accesses ...

the barrier() function ensures that the machine instructions corresponding to the first group of accesses will all end po-before any machine instructions corresponding to the second group of accesses -- even if some of the accesses are plain. (Of course, the CPU may then execute some of those accesses out of program order, but we already know how to deal with such issues.) Without the barrier() there would be no such guarantee; the two groups of accesses could be intermingled or even reversed in the object code.

The LKMM doesn't say much about the barrier() function, but it does require that all fences are also compiler barriers. In addition, it requires that the ordering properties of memory barriers such as smp_rmb() or smp_store_release() apply to plain accesses as well as to marked accesses.

This is the key to analyzing data races. Consider the MP pattern again, now using plain accesses for buf:

        int buf = 0, flag = 0;

        P0()
        {
                U: buf = 1;
                smp_wmb();
                X: WRITE_ONCE(flag, 1);
        }

        P1()
        {
                int r1;
                int r2 = 0;

                Y: r1 = READ_ONCE(flag);
                if (r1) {
                        smp_rmb();
                        V: r2 = buf;
                }
        }

This program does not contain a data race. Although the U and V accesses are race candidates, the LKMM can prove they are not concurrent as follows:

        The smp_wmb() fence in P0 is both a compiler barrier and a cumul-fence. It guarantees that no matter what hash of machine instructions the compiler generates for the plain access U, all those instructions will be po-before the fence.
        Consequently U's store to buf, no matter how it is carried out at the machine level, must propagate to P1 before X's store to flag does.

        X and Y are both marked accesses. Hence an rfe link from X to Y is a valid indicator that X propagated to P1 before Y executed, i.e., X ->vis Y. (And if there is no rfe link then r1 will be 0, so V will not be executed and ipso facto won't race with U.)

        The smp_rmb() fence in P1 is a compiler barrier as well as a fence. It guarantees that all the machine-level instructions corresponding to the access V will be po-after the fence, and therefore any loads among those instructions will execute after the fence does and hence after Y does.

Thus U's store to buf is forced to propagate to P1 before V's load executes (assuming V does execute), ruling out the possibility of a data race between them.

This analysis illustrates how the LKMM deals with plain accesses in general. Suppose R is a plain load and we want to show that R executes before some marked access E. We can do this by finding a marked access X such that R and X are ordered by a suitable fence and X ->xb* E. If E was also a plain access, we would also look for a marked access Y such that X ->xb* Y, and Y and E are ordered by a fence. We describe this arrangement by saying that R is "post-bounded" by X and E is "pre-bounded" by Y.

In fact, we go one step further: Since R is a read, we say that R is "r-post-bounded" by X. Similarly, E would be "r-pre-bounded" or "w-pre-bounded" by Y, depending on whether E was a store or a load. This distinction is needed because some fences affect only loads (i.e., smp_rmb()) and some affect only stores (i.e., smp_wmb()); otherwise the two types of bounds are the same. And as a degenerate case, we say that a marked access pre-bounds and post-bounds itself (e.g., if R above were a marked load then X could simply be taken to be R itself.)

The need to distinguish between r- and w-bounding raises yet another issue. When the source code contains a plain store, the compiler is allowed to put plain loads of the same location into the object code. For example, given the source code:

        x = 1;

the compiler is theoretically allowed to generate object code that looks like:

        if (x != 1)
                x = 1;

thereby adding a load (and possibly replacing the store entirely). For this reason, whenever the LKMM requires a plain store to be w-pre-bounded or w-post-bounded by a marked access, it also requires the store to be r-pre-bounded or r-post-bounded, so as to handle cases where the compiler adds a load.

(This may be overly cautious. We don't know of any examples where a compiler has augmented a store with a load in this fashion, and the Linux kernel developers would probably fight pretty hard to change a compiler if it ever did this. Still, better safe than sorry.)

Incidentally, the other transformation -- augmenting a plain load by adding in a store to the same location -- is not allowed. This is because the compiler cannot know whether any other CPUs might perform a concurrent load from that location. Two concurrent loads don't constitute a race (they can't interfere with each other), but a store does race with a concurrent load. Thus adding a store might create a data race where one was not already present in the source code, something the compiler is forbidden to do. Augmenting a store with a load, on the other hand, is acceptable because doing so won't create a data race unless one already existed.
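As a small illustration of these bounding notions (invented here, not taken from the LKMM's litmus suite), consider how release and acquire operations can bound plain accesses:

        int data;
        int ready;

        P0()
        {
                data = 1;                       /* plain store */
                smp_store_release(&ready, 1);   /* marked: w-post-bounds the plain store */
        }

        P1()
        {
                int r1, r2 = 0;

                r1 = smp_load_acquire(&ready);  /* marked: r-pre-bounds the load below */
                if (r1)
                        r2 = data;              /* plain load */
        }

The release store in P0 post-bounds the plain store to data, the acquire load in P1 pre-bounds the plain load, and the rfe link between the two marked accesses supplies the necessary visibility ordering, so the LKMM judges that the plain accesses do not race.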
The LKMM includes a second way to pre-bound plain accesses, in addition to fences: an address dependency from a marked load. That is, in the sequence:

        p = READ_ONCE(ptr);
        r = *p;

the LKMM says that the marked load of ptr pre-bounds the plain load of *p; the marked load must execute before any of the machine instructions corresponding to the plain load. This is a reasonable stipulation, since after all, the CPU can't perform the load of *p until it knows what value p will hold. Furthermore, without some assumption like this one, some usages typical of RCU would count as data races. For example:

        int a = 1, b;
        int *ptr = &a;

        P0()
        {
                b = 2;
                rcu_assign_pointer(ptr, &b);
        }

        P1()
        {
                int *p;
                int r;

                rcu_read_lock();
                p = rcu_dereference(ptr);
                r = *p;
                rcu_read_unlock();
        }

(In this example the rcu_read_lock() and rcu_read_unlock() calls don't really do anything, because there aren't any grace periods. They are included merely for the sake of good form; typically P0 would call synchronize_rcu() somewhere after the rcu_assign_pointer().)

rcu_assign_pointer() performs a store-release, so the plain store to b is definitely w-post-bounded before the store to ptr, and the two stores will propagate to P1 in that order. However, rcu_dereference() is only equivalent to READ_ONCE(). While it is a marked access, it is not a fence or compiler barrier. Hence the only guarantee we have that the load of ptr in P1 is r-pre-bounded before the load of *p (thus avoiding a race) is the assumption about address dependencies.

This is a situation where the compiler can undermine the memory model, and a certain amount of care is required when programming constructs like this one. In particular, comparisons between the pointer and other known addresses can cause trouble. If you have something like:

        p = rcu_dereference(ptr);
        if (p == &x)
                r = *p;

then the compiler just might generate object code resembling:

        p = rcu_dereference(ptr);
        if (p == &x)
                r = x;

or even:

        rtemp = x;
        p = rcu_dereference(ptr);
        if (p == &x)
                r = rtemp;

which would invalidate the memory model's assumption, since the CPU could now perform the load of x before the load of ptr (there might be a control dependency but no address dependency at the machine level).

Finally, it turns out there is a situation in which a plain write does not need to be w-post-bounded: when it is separated from the other race-candidate access by a fence. At first glance this may seem impossible. After all, to be race candidates the two accesses must be on different CPUs, and fences don't link events on different CPUs. Well, normal fences don't -- but rcu-fence can! Here's an example:

        int x, y;

        P0()
        {
                WRITE_ONCE(x, 1);
                synchronize_rcu();
                y = 3;
        }

        P1()
        {
                rcu_read_lock();
                if (READ_ONCE(x) == 0)
                        y = 2;
                rcu_read_unlock();
        }

Do the plain stores to y race? Clearly not if P1 reads a non-zero value for x, so let's assume the READ_ONCE(x) does obtain 0. This means that the read-side critical section in P1 must finish executing before the grace period in P0 does, because RCU's Grace-Period Guarantee says that otherwise P0's store to x would have propagated to P1 before the critical section started and so would have been visible to the READ_ONCE(). (Another way of putting it is that the fre link from the READ_ONCE() to the WRITE_ONCE() gives rise to an rcu-link between those two events.)

This means there is an rcu-fence link from P1's "y = 2" store to P0's "y = 3" store, and consequently the first must propagate from P1 to P0 before the second can execute.
Therefore the two stores cannot be concurrent and there is no race, even though P1's plain store to y isn't w-post-bounded by any marked accesses.

Putting all this material together yields the following picture. For race-candidate stores W and W', where W ->co W', the LKMM says the stores don't race if W can be linked to W' by a

        w-post-bounded ; vis ; w-pre-bounded

sequence. If W is plain then they also have to be linked by an

        r-post-bounded ; xb* ; w-pre-bounded

sequence, and if W' is plain then they also have to be linked by a

        w-post-bounded ; vis ; r-pre-bounded

sequence. For race-candidate load R and store W, the LKMM says the two accesses don't race if R can be linked to W by an

        r-post-bounded ; xb* ; w-pre-bounded

sequence or if W can be linked to R by a

        w-post-bounded ; vis ; r-pre-bounded

sequence. For the cases involving a vis link, the LKMM also accepts sequences in which W is linked to W' or R by a

        strong-fence ; xb* ; {w and/or r}-pre-bounded

sequence with no post-bounding, and in every case the LKMM also allows the link simply to be a fence with no bounding at all. If no sequence of the appropriate sort exists, the LKMM says that the accesses race.

There is one more part of the LKMM related to plain accesses (although not to data races) we should discuss. Recall that many relations such as hb are limited to marked accesses only. As a result, the happens-before, propagates-before, and rcu axioms (which state that various relations must not contain cycles) do not apply to plain accesses. Nevertheless, we do want to rule out such cycles, because they don't make sense even for plain accesses.

To this end, the LKMM imposes three extra restrictions, together called the "plain-coherence" axiom because of their resemblance to the rules used by the operational model to ensure cache coherence (that is, the rules governing the memory subsystem's choice of a store to satisfy a load request and its determination of where a store will fall in the coherence order):

        If R and W are race candidates and it is possible to link R to W by one of the xb* sequences listed above, then W ->rfe R is not allowed (i.e., a load cannot read from a store that it executes before, even if one or both is plain).

        If W and R are race candidates and it is possible to link W to R by one of the vis sequences listed above, then R ->fre W is not allowed (i.e., if a store is visible to a load then the load must read from that store or one coherence-after it).

        If W and W' are race candidates and it is possible to link W to W' by one of the vis sequences listed above, then W' ->co W is not allowed (i.e., if one store is visible to a second then the second must come after the first in the coherence order).

This is the extent to which the LKMM deals with plain accesses. Perhaps it could say more (for example, plain accesses might contribute to the ppo relation), but at the moment it seems that this minimal, conservative approach is good enough.

ODDS AND ENDS
-------------

This section covers material that didn't quite fit anywhere in the earlier sections.

The descriptions in this document don't always match the formal version of the LKMM exactly. For example, the actual formal definition of the prop relation makes the initial coe or fre part optional, and it doesn't require the events linked by the relation to be on the same CPU.
These differences are very unimportant; indeed, instances where the coe/fre part of prop is missing are of no interest because all the other parts (fences and rfe) are already included in hb anyway, and where the formal model adds prop into hb, it includes an explicit requirement that the events being linked are on the same CPU.

Another minor difference has to do with events that are both memory accesses and fences, such as those corresponding to smp_load_acquire() calls. In the formal model, these events aren't actually both reads and fences; rather, they are read events with an annotation marking them as acquires. (Or write events annotated as releases, in the case of smp_store_release().) The final effect is the same.

Although we didn't mention it above, the instruction execution ordering provided by the smp_rmb() fence doesn't apply to read events that are part of a non-value-returning atomic update. For instance, given:

        atomic_inc(&x);
        smp_rmb();
        r1 = READ_ONCE(y);

it is not guaranteed that the load from y will execute after the update to x. This is because the ARMv8 architecture allows non-value-returning atomic operations effectively to be executed off the CPU. Basically, the CPU tells the memory subsystem to increment x, and then the increment is carried out by the memory hardware with no further involvement from the CPU. Since the CPU doesn't ever read the value of x, there is nothing for the smp_rmb() fence to act on.

The LKMM defines a few extra synchronization operations in terms of things we have already covered. In particular, rcu_dereference() is treated as READ_ONCE() and rcu_assign_pointer() is treated as smp_store_release() -- which is basically how the Linux kernel treats them.

Although we said that plain accesses are not linked by the ppo relation, they do contribute to it indirectly. Namely, when there is an address dependency from a marked load R to a plain store W, followed by smp_wmb() and then a marked store W', the LKMM creates a ppo link from R to W'. The reasoning behind this is perhaps a little shaky, but essentially it says there is no way to generate object code for this source code in which W' could execute before R. Just as with pre-bounding by address dependencies, it is possible for the compiler to undermine this relation if sufficient care is not taken.

There are a few oddball fences which need special treatment: smp_mb__before_atomic(), smp_mb__after_atomic(), and smp_mb__after_spinlock(). The LKMM uses fence events with special annotations for them; they act as strong fences just like smp_mb() except for the sets of events that they order. Instead of ordering all po-earlier events against all po-later events, as smp_mb() does, they behave as follows:

        smp_mb__before_atomic() orders all po-earlier events against po-later atomic updates and the events following them;

        smp_mb__after_atomic() orders po-earlier atomic updates and the events preceding them against all po-later events;

        smp_mb__after_spinlock() orders po-earlier lock acquisition events and the events preceding them against all po-later events.

Interestingly, RCU and locking each introduce the possibility of deadlock. When faced with code sequences such as:

        spin_lock(&s);
        spin_lock(&s);
        spin_unlock(&s);
        spin_unlock(&s);

or:

        rcu_read_lock();
        synchronize_rcu();
        rcu_read_unlock();

what does the LKMM have to say? Answer: It says there are no allowed executions at all, which makes sense.
But this can also lead to misleading results, because if a piece of code has multiple possible executions, some of which deadlock, the model will report only on the non-deadlocking executions. For example:

        int x, y;

        P0()
        {
                int r0;

                WRITE_ONCE(x, 1);
                r0 = READ_ONCE(y);
        }

        P1()
        {
                rcu_read_lock();
                if (READ_ONCE(x) > 0) {
                        WRITE_ONCE(y, 36);
                        synchronize_rcu();
                }
                rcu_read_unlock();
        }

Is it possible to end up with r0 = 36 at the end? The LKMM will tell you it is not, but the model won't mention that this is because P1 will self-deadlock in the executions where it stores 36 in y.
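If you want to check this behavior for yourself, the example can be transcribed into a litmus test along the following lines (a sketch only; the test name, file name, and exact herd7 invocation are assumptions based on the tools/memory-model conventions):

        C rcu-deadlock

        {}

        P0(int *x, int *y)
        {
                int r0;

                WRITE_ONCE(*x, 1);
                r0 = READ_ONCE(*y);
        }

        P1(int *x, int *y)
        {
                rcu_read_lock();
                if (READ_ONCE(*x) > 0) {
                        WRITE_ONCE(*y, 36);
                        synchronize_rcu();
                }
                rcu_read_unlock();
        }

        exists (0:r0=36)

Running something like "herd7 -conf linux-kernel.cfg rcu-deadlock.litmus" should report that the exists clause is never satisfied, but the output will not call attention to the self-deadlocking executions that are the reason why.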