			 ============================
			 LINUX KERNEL MEMORY BARRIERS
			 ============================

By: David Howells <dhowells@redhat.com>
    Paul E. McKenney <paulmck@linux.ibm.com>
    Will Deacon <will.deacon@arm.com>
    Peter Zijlstra <peterz@infradead.org>

==========
DISCLAIMER
==========

This document is not a specification; it is intentionally (for the sake of
brevity) and unintentionally (due to being human) incomplete.  This document is
meant as a guide to using the various memory barriers provided by Linux, but
in case of any doubt (and there are many) please ask.  Some doubts may be
resolved by referring to the formal memory consistency model and related
documentation at tools/memory-model/.  Nevertheless, even this memory model
should be viewed as the collective opinion of its maintainers rather than as an
infallible oracle.

To repeat, this document is not a specification of what Linux expects from
hardware.

The purpose of this document is twofold:

 (1) to specify the minimum functionality that one can rely on for any
     particular barrier, and

 (2) to provide a guide as to how to use the barriers that are available.

Note that an architecture can provide more than the minimum requirement for
any particular barrier, but if the architecture provides less than that, that
architecture is incorrect.

Note also that it is possible that a barrier may be a no-op for an
architecture because the way that arch works renders an explicit barrier
unnecessary in that case.


========
CONTENTS
========

 (*) Abstract memory access model.

     - Device operations.
     - Guarantees.

 (*) What are memory barriers?

     - Varieties of memory barrier.
     - What may not be assumed about memory barriers?
     - Data dependency barriers (historical).
     - Control dependencies.
     - SMP barrier pairing.
     - Examples of memory barrier sequences.
     - Read memory barriers vs load speculation.
     - Multicopy atomicity.

 (*) Explicit kernel barriers.

     - Compiler barrier.
     - CPU memory barriers.

 (*) Implicit kernel memory barriers.

     - Lock acquisition functions.
     - Interrupt disabling functions.
     - Sleep and wake-up functions.
     - Miscellaneous functions.

 (*) Inter-CPU acquiring barrier effects.

     - Acquires vs memory accesses.

 (*) Where are memory barriers needed?

     - Interprocessor interaction.
     - Atomic operations.
     - Accessing devices.
     - Interrupts.

 (*) Kernel I/O barrier effects.

 (*) Assumed minimum execution ordering model.

 (*) The effects of the cpu cache.

     - Cache coherency.
     - Cache coherency vs DMA.
     - Cache coherency vs MMIO.

 (*) The things CPUs get up to.

     - And then there's the Alpha.
     - Virtual Machine Guests.

 (*) Example uses.

     - Circular buffers.

 (*) References.


============================
ABSTRACT MEMORY ACCESS MODEL
============================

Consider the following abstract model of the system:

		            :                :
		            :                :
		            :                :
		+-------+   :   +--------+   :   +-------+
		|       |   :   |        |   :   |       |
		|       |   :   |        |   :   |       |
		| CPU 1 |<----->| Memory |<----->| CPU 2 |
		|       |   :   |        |   :   |       |
		|       |   :   |        |   :   |       |
		+-------+   :   +--------+   :   +-------+
		    ^       :       ^        :       ^
		    |       :       |        :       |
		    |       :       |        :       |
		    |       :       v        :       |
		    |       :   +--------+   :       |
		    |       :   |        |   :       |
		    |       :   |        |   :       |
		    +---------->| Device |<----------+
		            :   |        |   :
		            :   |        |   :
		            :   +--------+   :
		            :                :

Each CPU executes a program that generates memory access operations.  In the
abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
perform the memory operations in any order it likes, provided program causality
appears to be maintained.  Similarly, the compiler may also arrange the
instructions it emits in any order it likes, provided it doesn't affect the
apparent operation of the program.
So in the above diagram, the effects of the memory operations performed by a
CPU are perceived by the rest of the system as the operations cross the
interface between the CPU and rest of the system (the dotted lines).

For example, consider the following sequence of events:

	CPU 1		CPU 2
	===============	===============
	{ A == 1; B == 2 }
	A = 3;		x = B;
	B = 4;		y = A;

The set of accesses as seen by the memory system in the middle can be arranged
in 24 different combinations:

	STORE A=3,	STORE B=4,	y=LOAD A->3,	x=LOAD B->4
	STORE A=3,	STORE B=4,	x=LOAD B->4,	y=LOAD A->3
	STORE A=3,	y=LOAD A->3,	STORE B=4,	x=LOAD B->4
	STORE A=3,	y=LOAD A->3,	x=LOAD B->2,	STORE B=4
	STORE A=3,	x=LOAD B->2,	STORE B=4,	y=LOAD A->3
	STORE A=3,	x=LOAD B->2,	y=LOAD A->3,	STORE B=4
	STORE B=4,	STORE A=3,	y=LOAD A->3,	x=LOAD B->4
	STORE B=4, ...
	...

and can thus result in four different combinations of values:

	x == 2, y == 1
	x == 2, y == 3
	x == 4, y == 1
	x == 4, y == 3

Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores were
committed.

As a further example, consider this sequence of events:

	CPU 1		CPU 2
	===============	===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;		Q = P;
	P = &B;		D = *Q;

There is an obvious data dependency here, as the value loaded into D depends on
the address retrieved from P by CPU 2.  At the end of the sequence, any of the
following results are possible:

	(Q == &A) and (D == 1)
	(Q == &B) and (D == 2)
	(Q == &B) and (D == 4)

Note that CPU 2 will never try and load C into D because the CPU will load P
into Q before issuing the load of *Q.


DEVICE OPERATIONS
-----------------

Some devices present their control interfaces as collections of memory
locations, but the order in which the control registers are accessed is very
important.  For instance, imagine an ethernet card with a set of internal
registers that are accessed through an address port register (A) and a data
port register (D).  To read internal register 5, the following code might then
be used:

	*A = 5;
	x = *D;

but this might show up as either of the following two sequences:

	STORE *A = 5, x = LOAD *D
	x = LOAD *D, STORE *A = 5

the second of which will almost certainly result in a malfunction, since it set
the address _after_ attempting to read the register.
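In a real driver, such an indexed register access would be written using the
kernel's MMIO accessors, which preserve program order as seen by the device.
A minimal sketch follows; the register offsets CARD_ADDR_PORT and
CARD_DATA_PORT are hypothetical, not taken from any actual hardware:

	/* Hypothetical indexed register read; 'base' is an ioremap()ed
	 * MMIO region.  readl()/writel() accesses to the same device are
	 * ordered with respect to each other, so the address-port write
	 * reaches the device before the data-port read is issued. */
	static u32 card_read_reg(void __iomem *base, u32 index)
	{
		writel(index, base + CARD_ADDR_PORT);	/* select register */
		return readl(base + CARD_DATA_PORT);	/* read its value */
	}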
GUARANTEES
----------

There are some minimal guarantees that may be expected of a CPU:

 (*) On any given CPU, dependent memory accesses will be issued in order, with
     respect to itself.  This means that for:

	Q = READ_ONCE(P); D = READ_ONCE(*Q);

     the CPU will issue the following memory operations:

	Q = LOAD P, D = LOAD *Q

     and always in that order.  However, on DEC Alpha, READ_ONCE() also
     emits a memory-barrier instruction, so that a DEC Alpha CPU will
     instead issue the following memory operations:

	Q = LOAD P, MEMORY_BARRIER, D = LOAD *Q, MEMORY_BARRIER

     Whether on DEC Alpha or not, the READ_ONCE() also prevents compiler
     mischief.

 (*) Overlapping loads and stores within a particular CPU will appear to be
     ordered within that CPU.  This means that for:

	a = READ_ONCE(*X); WRITE_ONCE(*X, b);

     the CPU will only issue the following sequence of memory operations:

	a = LOAD *X, STORE *X = b

     And for:

	WRITE_ONCE(*X, c); d = READ_ONCE(*X);

     the CPU will only issue:

	STORE *X = c, d = LOAD *X

     (Loads and stores overlap if they are targeted at overlapping pieces of
     memory).

And there are a number of things that _must_ or _must_not_ be assumed:

 (*) It _must_not_ be assumed that the compiler will do what you want
     with memory references that are not protected by READ_ONCE() and
     WRITE_ONCE().  Without them, the compiler is within its rights to
     do all sorts of "creative" transformations, which are covered in
     the COMPILER BARRIER section.

 (*) It _must_not_ be assumed that independent loads and stores will be issued
     in the order given.  This means that for:

	X = *A; Y = *B; *D = Z;

     we may get any of the following sequences:

	X = LOAD *A,  Y = LOAD *B,  STORE *D = Z
	X = LOAD *A,  STORE *D = Z, Y = LOAD *B
	Y = LOAD *B,  X = LOAD *A,  STORE *D = Z
	Y = LOAD *B,  STORE *D = Z, X = LOAD *A
	STORE *D = Z, X = LOAD *A,  Y = LOAD *B
	STORE *D = Z, Y = LOAD *B,  X = LOAD *A

 (*) It _must_ be assumed that overlapping memory accesses may be merged or
     discarded.  This means that for:

	X = *A; Y = *(A + 4);

     we may get any one of the following sequences:

	X = LOAD *A; Y = LOAD *(A + 4);
	Y = LOAD *(A + 4); X = LOAD *A;
	{X, Y} = LOAD {*A, *(A + 4) };

     And for:

	*A = X; *(A + 4) = Y;

     we may get any of:

	STORE *A = X; STORE *(A + 4) = Y;
	STORE *(A + 4) = Y; STORE *A = X;
	STORE {*A, *(A + 4) } = {X, Y};

And there are anti-guarantees:

 (*) These guarantees do not apply to bitfields, because compilers often
     generate code to modify these using non-atomic read-modify-write
     sequences.  Do not attempt to use bitfields to synchronize parallel
     algorithms.

 (*) Even in cases where bitfields are protected by locks, all fields in a
     given bitfield must be protected by one lock.  If two fields in a given
     bitfield are protected by different locks, the compiler's non-atomic
     read-modify-write sequences can cause an update to one field to corrupt
     the value of an adjacent field.  (A sketch of this hazard follows this
     list.)

 (*) These guarantees apply only to properly aligned and sized scalar
     variables.  "Properly sized" currently means variables that are the same
     size as "char", "short", "int" and "long".  "Properly aligned" means the
     natural alignment, thus no constraints for "char", two-byte alignment for
     "short", four-byte alignment for "int", and either four-byte or eight-byte
     alignment for "long", on 32-bit and 64-bit systems, respectively.  Note
     that these guarantees were introduced into the C11 standard, so beware
     when using older pre-C11 compilers (for example, gcc 4.6).  The portion
     of the standard containing this guarantee is Section 3.14, which defines
     "memory location" as follows:

	memory location
		either an object of scalar type, or a maximal sequence
		of adjacent bit-fields all having nonzero width

		NOTE 1: Two threads of execution can update and access
		separate memory locations without interfering with
		each other.

		NOTE 2: A bit-field and an adjacent non-bit-field member
		are in separate memory locations.  The same applies
		to two bit-fields, if one is declared inside a nested
		structure declaration and the other is not, or if the two
		are separated by a zero-length bit-field declaration,
		or if they are separated by a non-bit-field member
		declaration.  It is not safe to concurrently update two
		bit-fields in the same structure if all members declared
		between them are also bit-fields, no matter what the
		sizes of those intervening bit-fields happen to be.
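To make the bitfield anti-guarantee concrete, here is a hedged sketch of the
hazard; the structure, field and lock names are illustrative only:

	struct foo {
		spinlock_t lock_a;	/* intended to protect 'a' */
		spinlock_t lock_b;	/* intended to protect 'b' */
		int a : 4;		/* shares a memory location... */
		int b : 4;		/* ...with this adjacent field */
	};

	/* CPU 1 */
	spin_lock(&p->lock_a);
	p->a = 1;		/* compiled as a read-modify-write of the
				 * whole memory location containing a and b */
	spin_unlock(&p->lock_a);

	/* CPU 2: despite holding a different lock, this update can load
	 * the old value of the shared location, modify 'b', and store it
	 * back, silently overwriting CPU 1's update to 'a'. */
	spin_lock(&p->lock_b);
	p->b = 1;
	spin_unlock(&p->lock_b);

The fix is to protect both fields with the same lock, or to not use adjacent
bitfields for independently locked data in the first place.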
=========================
WHAT ARE MEMORY BARRIERS?
=========================

As can be seen above, independent memory operations are effectively performed
in random order, but this can be a problem for CPU-CPU interaction and for I/O.
What is required is some way of intervening to instruct the compiler and the
CPU to restrict the order.

Memory barriers are such interventions.  They impose a perceived partial
ordering over the memory operations on either side of the barrier.

Such enforcement is important because the CPUs and other devices in a system
can use a variety of tricks to improve performance, including reordering,
deferral and combination of memory operations; speculative loads; speculative
branch prediction and various types of caching.  Memory barriers are used to
override or suppress these tricks, allowing the code to sanely control the
interaction of multiple CPUs and/or devices.


VARIETIES OF MEMORY BARRIER
---------------------------

Memory barriers come in four basic varieties:

 (1) Write (or store) memory barriers.

     A write memory barrier gives a guarantee that all the STORE operations
     specified before the barrier will appear to happen before all the STORE
     operations specified after the barrier with respect to the other
     components of the system.

     A write barrier is a partial ordering on stores only; it is not required
     to have any effect on loads.

     A CPU can be viewed as committing a sequence of store operations to the
     memory system as time progresses.  All stores _before_ a write barrier
     will occur _before_ all the stores after the write barrier.

     [!] Note that write barriers should normally be paired with read or data
     dependency barriers; see the "SMP barrier pairing" subsection.

 (2) Data dependency barriers.

     A data dependency barrier is a weaker form of read barrier.  In the case
     where two loads are performed such that the second depends on the result
     of the first (eg: the first load retrieves the address to which the second
     load will be directed), a data dependency barrier would be required to
     make sure that the target of the second load is updated after the address
     obtained by the first load is accessed.

     A data dependency barrier is a partial ordering on interdependent loads
     only; it is not required to have any effect on stores, independent loads
     or overlapping loads.

     As mentioned in (1), the other CPUs in the system can be viewed as
     committing sequences of stores to the memory system that the CPU being
     considered can then perceive.  A data dependency barrier issued by the CPU
     under consideration guarantees that for any load preceding it, if that
     load touches one of a sequence of stores from another CPU, then by the
     time the barrier completes, the effects of all the stores prior to that
     touched by the load will be perceptible to any loads issued after the data
     dependency barrier.

     See the "Examples of memory barrier sequences" subsection for diagrams
     showing the ordering constraints.

     [!] Note that the first load really has to have a _data_ dependency and
     not a control dependency.  If the address for the second load is dependent
     on the first load, but the dependency is through a conditional rather than
     actually loading the address itself, then it's a _control_ dependency and
     a full read barrier or better is required.  See the "Control dependencies"
     subsection for more information.

     [!] Note that data dependency barriers should normally be paired with
     write barriers; see the "SMP barrier pairing" subsection.

 (3) Read (or load) memory barriers.

     A read barrier is a data dependency barrier plus a guarantee that all the
     LOAD operations specified before the barrier will appear to happen before
     all the LOAD operations specified after the barrier with respect to the
     other components of the system.
     A read barrier is a partial ordering on loads only; it is not required to
     have any effect on stores.

     Read memory barriers imply data dependency barriers, and so can substitute
     for them.

     [!] Note that read barriers should normally be paired with write barriers;
     see the "SMP barrier pairing" subsection.

 (4) General memory barriers.

     A general memory barrier gives a guarantee that all the LOAD and STORE
     operations specified before the barrier will appear to happen before all
     the LOAD and STORE operations specified after the barrier with respect to
     the other components of the system.

     A general memory barrier is a partial ordering over both loads and stores.

     General memory barriers imply both read and write memory barriers, and so
     can substitute for either.

And a couple of implicit varieties:

 (5) ACQUIRE operations.

     This acts as a one-way permeable barrier.  It guarantees that all memory
     operations after the ACQUIRE operation will appear to happen after the
     ACQUIRE operation with respect to the other components of the system.
     ACQUIRE operations include LOCK operations and both smp_load_acquire()
     and smp_cond_load_acquire() operations.

     Memory operations that occur before an ACQUIRE operation may appear to
     happen after it completes.

     An ACQUIRE operation should almost always be paired with a RELEASE
     operation.

 (6) RELEASE operations.

     This also acts as a one-way permeable barrier.  It guarantees that all
     memory operations before the RELEASE operation will appear to happen
     before the RELEASE operation with respect to the other components of the
     system.  RELEASE operations include UNLOCK operations and
     smp_store_release() operations.

     Memory operations that occur after a RELEASE operation may appear to
     happen before it completes.

     The use of ACQUIRE and RELEASE operations generally precludes the need
     for other sorts of memory barrier.  In addition, a RELEASE+ACQUIRE pair
     is -not- guaranteed to act as a full memory barrier.  However, after an
     ACQUIRE on a given variable, all memory accesses preceding any prior
     RELEASE on that same variable are guaranteed to be visible.  In other
     words, within a given variable's critical section, all accesses of all
     previous critical sections for that variable are guaranteed to have
     completed.

     This means that ACQUIRE acts as a minimal "acquire" operation and
     RELEASE acts as a minimal "release" operation.

A subset of the atomic operations described in atomic_t.txt have ACQUIRE and
RELEASE variants in addition to fully-ordered and relaxed (no barrier
semantics) definitions.  For compound atomics performing both a load and a
store, ACQUIRE semantics apply only to the load and RELEASE semantics apply
only to the store portion of the operation.

Memory barriers are only required where there's a possibility of interaction
between two CPUs or between a CPU and a device.  If it can be guaranteed that
there won't be any such interaction in any particular piece of code, then
memory barriers are unnecessary in that piece of code.

Note that these are the _minimum_ guarantees.  Different architectures may
give more substantial guarantees, but they may _not_ be relied upon outside of
arch specific code.
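To illustrate the ACQUIRE/RELEASE pairing described above, here is a minimal
message-passing sketch; 'data' and 'ready' are illustrative shared variables,
not taken from any existing kernel code:

	/* CPU 1 (producer): the RELEASE orders the store to 'data'
	 * before the store to 'ready'. */
	WRITE_ONCE(data, 42);
	smp_store_release(&ready, 1);

	/* CPU 2 (consumer): the ACQUIRE orders the load of 'ready'
	 * before any subsequent access, so if 'ready' is seen as 1,
	 * the store to 'data' made before the paired RELEASE is
	 * guaranteed to be visible. */
	if (smp_load_acquire(&ready))
		r1 = READ_ONCE(data);	/* guaranteed to observe 42 */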
WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
----------------------------------------------

There are certain things that the Linux kernel memory barriers do not
guarantee:

 (*) There is no guarantee that any of the memory accesses specified before a
     memory barrier will be _complete_ by the completion of a memory barrier
     instruction; the barrier can be considered to draw a line in that CPU's
     access queue that accesses of the appropriate type may not cross.

 (*) There is no guarantee that issuing a memory barrier on one CPU will have
     any direct effect on another CPU or any other hardware in the system.  The
     indirect effect will be the order in which the second CPU sees the effects
     of the first CPU's accesses occur, but see the next point:

 (*) There is no guarantee that a CPU will see the correct order of effects
     from a second CPU's accesses, even _if_ the second CPU uses a memory
     barrier, unless the first CPU _also_ uses a matching memory barrier (see
     the subsection on "SMP Barrier Pairing").

 (*) There is no guarantee that some intervening piece of off-the-CPU
     hardware[*] will not reorder the memory accesses.  CPU cache coherency
     mechanisms should propagate the indirect effects of a memory barrier
     between CPUs, but might not do so in order.

	[*] For information on bus mastering DMA and coherency please read:

	    Documentation/driver-api/pci/pci.rst
	    Documentation/core-api/dma-api-howto.rst
	    Documentation/core-api/dma-api.rst


DATA DEPENDENCY BARRIERS (HISTORICAL)
-------------------------------------

As of v4.15 of the Linux kernel, an smp_mb() was added to READ_ONCE() for
DEC Alpha, which means that about the only people who need to pay attention
to this section are those working on DEC Alpha architecture-specific code and
those working on READ_ONCE() itself.  For those who need it, and for those who
are interested in the history, here is the story of data-dependency barriers.

The usage requirements of data dependency barriers are a little subtle, and
it's not always obvious that they're needed.  To illustrate, consider the
following sequence of events:

	CPU 1		      CPU 2
	===============	      ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	WRITE_ONCE(P, &B);
			      Q = READ_ONCE(P);
			      D = *Q;

There's a clear data dependency here, and it would seem that by the end of the
sequence, Q must be either &A or &B, and that:

	(Q == &A) implies (D == 1)
	(Q == &B) implies (D == 4)

But!  CPU 2's perception of P may be updated _before_ its perception of B, thus
leading to the following situation:

	(Q == &B) and (D == 2) ????

While this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
Alpha).

To deal with this, a data dependency barrier or better must be inserted between
the address load and the data load:

	CPU 1		      CPU 2
	===============	      ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	WRITE_ONCE(P, &B);
			      Q = READ_ONCE(P);
			      <data dependency barrier>
			      D = *Q;

This enforces the occurrence of one of the two implications, and prevents the
third possibility from arising.

[!] Note that this extremely counterintuitive situation arises most easily on
machines with split caches, so that, for example, one cache bank processes
even-numbered cache lines and the other bank processes odd-numbered cache
lines.  The pointer P might be stored in an odd-numbered cache line, and the
variable B might be stored in an even-numbered cache line.
Then, if the even-numbered bank of the reading CPU's cache is extremely busy
while the odd-numbered bank is idle, one can see the new value of the pointer
P (&B), but the old value of the variable B (2).

A data-dependency barrier is not required to order dependent writes because the
CPUs that the Linux kernel supports don't do writes until they are certain (1)
that the write will actually happen, (2) of the location of the write, and (3)
of the value to be written.  But please carefully read the "CONTROL
DEPENDENCIES" section and the Documentation/RCU/rcu_dereference.rst file:  The
compiler can and does break dependencies in a great many highly creative ways.

	CPU 1		      CPU 2
	===============	      ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	WRITE_ONCE(P, &B);
			      Q = READ_ONCE(P);
			      WRITE_ONCE(*Q, 5);

Therefore, no data-dependency barrier is required to order the read into Q
with the store into *Q.  In other words, this outcome is prohibited, even
without a data-dependency barrier:

	(Q == &B) && (B == 4)

Please note that this pattern should be rare.  After all, the whole point of
dependency ordering is to -prevent- writes to the data structure, along with
the expensive cache misses associated with those writes.  This pattern can be
used to record rare error conditions and the like, and the CPUs' naturally
occurring ordering prevents such records from being lost.

Note well that the ordering provided by a data dependency is local to the CPU
containing it.  See the section on "Multicopy atomicity" for more information.

The data dependency barrier is very important to the RCU system, for example.
See rcu_assign_pointer() and rcu_dereference() in include/linux/rcupdate.h.
This permits the current target of an RCU'd pointer to be replaced with a new
modified target, without the replacement target appearing to be incompletely
initialised.

See also the subsection on "Cache Coherency" for a more thorough example.


CONTROL DEPENDENCIES
--------------------

Control dependencies can be a bit tricky because current compilers do not
understand them.  The purpose of this section is to help you prevent the
compiler's ignorance from breaking your code.

A load-load control dependency requires a full read memory barrier, not
simply a data dependency barrier to make it work correctly.  Consider the
following bit of code:

	q = READ_ONCE(a);
	if (q) {
		<data dependency barrier>  /* BUG: No data dependency!!! */
		p = READ_ONCE(b);
	}

This will not have the desired effect because there is no actual data
dependency, but rather a control dependency that the CPU may short-circuit
by attempting to predict the outcome in advance, so that other CPUs see
the load from b as having happened before the load from a.  In such a case
what's actually required is:

	q = READ_ONCE(a);
	if (q) {
		<read barrier>
		p = READ_ONCE(b);
	}

However, stores are not speculated.  This means that ordering -is- provided
for load-store control dependencies, as in the following example:

	q = READ_ONCE(a);
	if (q) {
		WRITE_ONCE(b, 1);
	}

Control dependencies pair normally with other types of barriers.  That said,
please note that neither READ_ONCE() nor WRITE_ONCE() are optional!  Without
the READ_ONCE(), the compiler might combine the load from 'a' with other loads
from 'a'.  Without the WRITE_ONCE(), the compiler might combine the store to
'b' with other stores to 'b'.  Either can result in highly counterintuitive
effects on ordering.
Worse yet, if the compiler is able to prove (say) that the value of variable
'a' is always non-zero, it would be well within its rights to optimize the
original example by eliminating the "if" statement as follows:

	q = a;
	b = 1;  /* BUG: Compiler and CPU can both reorder!!! */

So don't leave out the READ_ONCE().

It is tempting to try to enforce ordering on identical stores on both branches
of the "if" statement as follows:

	q = READ_ONCE(a);
	if (q) {
		barrier();
		WRITE_ONCE(b, 1);
		do_something();
	} else {
		barrier();
		WRITE_ONCE(b, 1);
		do_something_else();
	}

Unfortunately, current compilers will transform this as follows at high
optimization levels:

	q = READ_ONCE(a);
	barrier();
	WRITE_ONCE(b, 1);  /* BUG: No ordering vs. load from a!!! */
	if (q) {
		/* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
		do_something();
	} else {
		/* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
		do_something_else();
	}

Now there is no conditional between the load from 'a' and the store to
'b', which means that the CPU is within its rights to reorder them:  The
conditional is absolutely required, and must be present in the assembly
code even after all compiler optimizations have been applied.  Therefore,
if you need ordering in this example, you need explicit memory barriers,
for example, smp_store_release():

	q = READ_ONCE(a);
	if (q) {
		smp_store_release(&b, 1);
		do_something();
	} else {
		smp_store_release(&b, 1);
		do_something_else();
	}

In contrast, without explicit memory barriers, two-legged-if control
ordering is guaranteed only when the stores differ, for example:

	q = READ_ONCE(a);
	if (q) {
		WRITE_ONCE(b, 1);
		do_something();
	} else {
		WRITE_ONCE(b, 2);
		do_something_else();
	}

The initial READ_ONCE() is still required to prevent the compiler from
proving the value of 'a'.

In addition, you need to be careful what you do with the local variable 'q',
otherwise the compiler might be able to guess the value and again remove the
needed conditional.  For example:

	q = READ_ONCE(a);
	if (q % MAX) {
		WRITE_ONCE(b, 1);
		do_something();
	} else {
		WRITE_ONCE(b, 2);
		do_something_else();
	}

If MAX is defined to be 1, then the compiler knows that (q % MAX) is equal to
zero, in which case the compiler is within its rights to transform the above
code into the following:

	q = READ_ONCE(a);
	WRITE_ONCE(b, 2);
	do_something_else();

Given this transformation, the CPU is not required to respect the ordering
between the load from variable 'a' and the store to variable 'b'.  It is
tempting to add a barrier(), but this does not help.  The conditional is gone,
and the barrier won't bring it back.  Therefore, if you are relying on this
ordering, you should make sure that MAX is greater than one, perhaps as
follows:

	q = READ_ONCE(a);
	BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
	if (q % MAX) {
		WRITE_ONCE(b, 1);
		do_something();
	} else {
		WRITE_ONCE(b, 2);
		do_something_else();
	}

Please note once again that the stores to 'b' differ.  If they were identical,
as noted earlier, the compiler could pull this store outside of the 'if'
statement.

You must also be careful not to rely too much on boolean short-circuit
evaluation.  Consider this example:

	q = READ_ONCE(a);
	if (q || 1 > 0)
		WRITE_ONCE(b, 1);

Because the first condition cannot fault and the second condition is always
true, the compiler can transform this example as follows, defeating the
control dependency:

	q = READ_ONCE(a);
	WRITE_ONCE(b, 1);

This example underscores the need to ensure that the compiler cannot out-guess
your code.
More generally, although READ_ONCE() does force the compiler to actually emit
code for a given load, it does not force the compiler to use the results.

In addition, control dependencies apply only to the then-clause and
else-clause of the if-statement in question.  In particular, it does
not necessarily apply to code following the if-statement:

	q = READ_ONCE(a);
	if (q) {
		WRITE_ONCE(b, 1);
	} else {
		WRITE_ONCE(b, 2);
	}
	WRITE_ONCE(c, 1);  /* BUG: No ordering against the read from 'a'. */

It is tempting to argue that there in fact is ordering because the compiler
cannot reorder volatile accesses and also cannot reorder the writes to 'b'
with the condition.  Unfortunately for this line of reasoning, the compiler
might compile the two writes to 'b' as conditional-move instructions, as in
this fanciful pseudo-assembly language:

	ld r1,a
	cmp r1,$0
	cmov,ne r4,$1
	cmov,eq r4,$2
	st r4,b
	st $1,c

A weakly ordered CPU would have no dependency of any sort between the load
from 'a' and the store to 'c'.  The control dependencies would extend only
to the pair of cmov instructions and the store depending on them.  In short,
control dependencies apply only to the stores in the then-clause and
else-clause of the if-statement in question (including functions invoked by
those two clauses), not to code following that if-statement.


Note well that the ordering provided by a control dependency is local to
the CPU containing it.  See the section on "Multicopy atomicity" for
more information.


In summary:

 (*) Control dependencies can order prior loads against later stores.
     However, they do -not- guarantee any other sort of ordering:
     Not prior loads against later loads, nor prior stores against
     later anything.  If you need these other forms of ordering,
     use smp_rmb(), smp_wmb(), or, in the case of prior stores and
     later loads, smp_mb().

 (*) If both legs of the "if" statement begin with identical stores to
     the same variable, then those stores must be ordered, either by
     preceding both of them with smp_mb() or by using smp_store_release()
     to carry out the stores.  Please note that it is -not- sufficient
     to use barrier() at beginning of each leg of the "if" statement
     because, as shown by the example above, optimizing compilers can
     destroy the control dependency while respecting the letter of the
     barrier() law.

 (*) Control dependencies require at least one run-time conditional
     between the prior load and the subsequent store, and this
     conditional must involve the prior load.  If the compiler is able
     to optimize the conditional away, it will have also optimized
     away the ordering.  Careful use of READ_ONCE() and WRITE_ONCE()
     can help to preserve the needed conditional.

 (*) Control dependencies require that the compiler avoid reordering the
     dependency into nonexistence.  Careful use of READ_ONCE() or
     atomic{,64}_read() can help to preserve your control dependency.
     Please see the COMPILER BARRIER section for more information.

 (*) Control dependencies apply only to the then-clause and else-clause
     of the if-statement containing the control dependency, including
     any functions that these two clauses call.  Control dependencies
     do -not- apply to code following the if-statement containing the
     control dependency.

 (*) Control dependencies pair normally with other types of barriers.

 (*) Control dependencies do -not- provide multicopy atomicity.  If you
     need all the CPUs to see a given store at the same time, use smp_mb().

 (*) Compilers do not understand control dependencies.  It is therefore
     your job to ensure that they do not break your code.
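To pull the above rules together, here is a hedged sketch annotating which
accesses a control dependency does and does not order; 'a', 'b', 'c' and 'd'
are illustrative shared variables:

	q = READ_ONCE(a);
	if (q) {
		WRITE_ONCE(b, 1);	/* ordered after the load from 'a':
					 * stores are not speculated. */
		r1 = READ_ONCE(c);	/* NOT ordered against the load from
					 * 'a': loads may be speculated. */
	}
	WRITE_ONCE(d, 1);		/* NOT ordered: code after the
					 * if-statement is not covered by
					 * the control dependency. */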
SMP BARRIER PAIRING
-------------------

When dealing with CPU-CPU interactions, certain types of memory barrier should
always be paired.  A lack of appropriate pairing is almost certainly an error.

General barriers pair with each other, though they also pair with most
other types of barriers, albeit without multicopy atomicity.  An acquire
barrier pairs with a release barrier, but both may also pair with other
barriers, including of course general barriers.  A write barrier pairs
with a data dependency barrier, a control dependency, an acquire barrier,
a release barrier, a read barrier, or a general barrier.  Similarly a
read barrier, control dependency, or a data dependency barrier pairs
with a write barrier, an acquire barrier, a release barrier, or a
general barrier:

	CPU 1		      CPU 2
	===============	      ===============
	WRITE_ONCE(a, 1);
	<write barrier>
	WRITE_ONCE(b, 2);     x = READ_ONCE(b);
			      <read barrier>
			      y = READ_ONCE(a);

Or:

	CPU 1		      CPU 2
	===============	      ===============================
	a = 1;
	<write barrier>
	WRITE_ONCE(b, &a);    x = READ_ONCE(b);
			      <data dependency barrier>
			      y = *x;

Or even:

	CPU 1		      CPU 2
	===============	      ===============================
	r1 = READ_ONCE(y);
	<general barrier>
	WRITE_ONCE(x, 1);     if (r2 = READ_ONCE(x)) {
			         <implicit control dependency>
			         WRITE_ONCE(y, 1);
			      }

	assert(r1 == 0 || r2 == 0);

Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.

[!] Note that the stores before the write barrier would normally be expected to
match the loads after the read barrier or the data dependency barrier, and vice
versa:

	CPU 1                               CPU 2
	===================                 ===================
	WRITE_ONCE(a, 1);    }----   --->{  v = READ_ONCE(c);
	WRITE_ONCE(b, 2);    }    \ /    {  w = READ_ONCE(d);
	<write barrier>            \        <read barrier>
	WRITE_ONCE(c, 3);    }    / \    {  x = READ_ONCE(a);
	WRITE_ONCE(d, 4);    }----   --->{  y = READ_ONCE(b);


EXAMPLES OF MEMORY BARRIER SEQUENCES
------------------------------------

Firstly, write barriers act as partial orderings on store operations.
Consider the following sequence of events:

	CPU 1
	=======================
	STORE A = 1
	STORE B = 2
	STORE C = 3
	<write barrier>
	STORE D = 4
	STORE E = 5

This sequence of events is committed to the memory coherence system in an order
that the rest of the system might perceive as the unordered set of { STORE A,
STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
}:

	+-------+       :      :
	|       |       +------+
	|       |------>| C=3  |     }     /\
	|       |  :    +------+     }-----  \  -----> Events perceptible to
	|       |  :    | A=1  |     }        \/       the rest of the system
	|       |  :    +------+     }
	| CPU 1 |  :    | B=2  |     }
	|       |       +------+     }
	|       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
	|       |       +------+     }        requires all stores prior to the
	|       |  :    | E=5  |     }        barrier to be committed before
	|       |  :    +------+     }        further stores may take place
	|       |------>| D=4  |     }
	|       |       +------+
	+-------+       :      :
	                   |
	                   | Sequence in which stores are committed to the
	                   | memory system by CPU 1
	                   V
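Expressed in kernel C, the smp_wmb()/smp_rmb() pairing described in the
previous subsection gives the familiar flag-plus-payload pattern.  This is a
hedged sketch; 'obj', compute_payload() and use_payload() are illustrative
names, not existing kernel interfaces:

	/* CPU 1 (producer) */
	obj->payload = compute_payload();	/* payload stores... */
	smp_wmb();				/* ...ordered before... */
	WRITE_ONCE(obj->ready, 1);		/* ...the flag store */

	/* CPU 2 (consumer) */
	if (READ_ONCE(obj->ready)) {		/* flag load... */
		smp_rmb();			/* ...ordered before... */
		use_payload(obj->payload);	/* ...the payload load */
	}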
Secondly, data dependency barriers act as partial orderings on data-dependent
loads.  Consider the following sequence of events:

	CPU 1			CPU 2
	=======================	=======================
		{ B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B		LOAD X
	STORE D = 4		LOAD C (gets &B)
				LOAD *C (reads B)

Without intervention, CPU 2 may perceive the events on CPU 1 in some
effectively random order, despite the write barrier issued by CPU 1:

	+-------+       :      :                :       :
	|       |       +------+                +-------+  | Sequence of update
	|       |------>| B=2  |-----       --->| Y->8  |  | of perception on
	|       |  :    +------+     \          +-------+  | CPU 2
	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
	|       |       +------+       |        +-------+
	|       |   wwwwwwwwwwwwwwww   |        :       :
	|       |       +------+       |        :       :
	|       |  :    | C=&B |---    |        :       :       +-------+
	|       |  :    +------+   \   |        +-------+       |       |
	|       |------>| D=4  |    ----------->| C->&B |------>|       |
	|       |       +------+       |        +-------+       |       |
	+-------+       :      :       |        :       :       |       |
	                               |        :       :       |       |
	                               |        :       :       | CPU 2 |
	                               |        +-------+       |       |
	    Apparently incorrect --->  |        | B->7  |------>|       |
	    perception of B (!)        |        +-------+       |       |
	                               |        :       :       |       |
	                               |        +-------+       |       |
	    The load of X holds --->    \       | X->9  |------>|       |
	    up the maintenance           \      +-------+       |       |
	    of coherence of B             ----->| B->2  |       +-------+
	                                        +-------+
	                                        :       :

In the above example, CPU 2 perceives that B is 7, despite the load of *C
(which would be B) coming after the LOAD of C.

If, however, a data dependency barrier were to be placed between the load of C
and the load of *C (ie: B) on CPU 2:

	CPU 1			CPU 2
	=======================	=======================
		{ B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B		LOAD X
	STORE D = 4		LOAD C (gets &B)
				<data dependency barrier>
				LOAD *C (reads B)

then the following will occur:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| B=2  |-----       --->| Y->8  |
	|       |  :    +------+     \          +-------+
	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |
	|       |       +------+       |        +-------+
	|       |   wwwwwwwwwwwwwwww   |        :       :
	|       |       +------+       |        :       :
	|       |  :    | C=&B |---    |        :       :       +-------+
	|       |  :    +------+   \   |        +-------+       |       |
	|       |------>| D=4  |    ----------->| C->&B |------>|       |
	|       |       +------+       |        +-------+       |       |
	+-------+       :      :       |        :       :       |       |
	                               |        :       :       |       |
	                               |        :       :       | CPU 2 |
	                               |        +-------+       |       |
	                               |        | X->9  |------>|       |
	                               |        +-------+       |       |
	    Makes sure all effects --->  \  ddddddddddddddddd   |       |
	    prior to the store of C       \     +-------+       |       |
	    are perceptible to             ---->| B->2  |------>|       |
	    subsequent loads                    +-------+       |       |
	                                        :       :       +-------+
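In kernel code this publish-then-dereference pattern is normally written with
the RCU primitives mentioned in the historical section, rather than with raw
barriers.  A minimal sketch, assuming an illustrative global pointer 'gp',
structure type and do_something() helper:

	/* Writer: initialise the structure, then publish it with
	 * release semantics so readers cannot see it half-built. */
	p = kmalloc(sizeof(*p), GFP_KERNEL);
	p->data = 42;
	rcu_assign_pointer(gp, p);

	/* Reader: rcu_dereference() performs a dependency-ordered load,
	 * so the ->data access is guaranteed to see the writer's
	 * initialisation. */
	rcu_read_lock();
	q = rcu_dereference(gp);
	if (q)
		do_something(q->data);
	rcu_read_unlock();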
And thirdly, a read barrier acts as a partial order on loads.  Consider the
following sequence of events:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				LOAD A

Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
some effectively random order, despite the write barrier issued by CPU 1:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       :       :       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       | A->0  |------>|       |
	                                |       +-------+       |       |
	                                |       :       :       +-------+
	                                 \      :       :
	                                  \     +-------+
	                                   ---->| A->1  |
	                                        +-------+
	                                        :       :

If, however, a read barrier were to be placed between the load of B and the
load of A on CPU 2:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				<read barrier>
				LOAD A

then the partial ordering imposed by CPU 1 will be perceived correctly by
CPU 2:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       :       :       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                |       :       :       |       |
	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
	  barrier causes all effects      \     +-------+       |       |
	  prior to the storage of B        ---->| A->1  |------>|       |
	  to be perceptible to CPU 2            +-------+       |       |
	                                        :       :       +-------+

To illustrate this more completely, consider what could happen if the code
contained a load of A either side of the read barrier:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				LOAD A [first load of A]
				<read barrier>
				LOAD A [second load of A]

Even though the two loads of A both occur after the load of B, they may both
come up with different values:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       :       :       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                |       :       :       |       |
	                                |       +-------+       |       |
	                                |       | A->0  |------>| 1st   |
	                                |       +-------+       |       |
	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
	  barrier causes all effects      \     +-------+       |       |
	  prior to the storage of B        ---->| A->1  |------>| 2nd   |
	  to be perceptible to CPU 2            +-------+       |       |
	                                        :       :       +-------+

But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
before the read barrier completes anyway:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       :       :       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                 \      :       :       |       |
	                                  \     +-------+       |       |
	                                   ---->| A->1  |------>| 1st   |
	                                        +-------+       |       |
	                                    rrrrrrrrrrrrrrrrr   |       |
	                                        +-------+       |       |
	                                        | A->1  |------>| 2nd   |
	                                        +-------+       |       |
	                                        :       :       +-------+

The guarantee is that the second load will always come up with A == 1 if the
load of B came up with B == 2.  No such guarantee exists for the first load of
A; that may come up with either A == 0 or A == 1.
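The same guarantee can be written directly in kernel C; a hedged litmus-style
sketch of the above, with A and B as illustrative shared variables:

	/* CPU 2, after CPU 1 has done:
	 *	WRITE_ONCE(A, 1); smp_wmb(); WRITE_ONCE(B, 2);
	 */
	r1 = READ_ONCE(B);	/* suppose this reads 2 */
	r2 = READ_ONCE(A);	/* unconstrained: may read 0 or 1 */
	smp_rmb();
	r3 = READ_ONCE(A);	/* must read 1 if r1 == 2 */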
READ MEMORY BARRIERS VS LOAD SPECULATION
----------------------------------------

Many CPUs speculate with loads: that is they see that they will need to load an
item from memory, and they find a time where they're not using the bus for any
other loads, and so do the load in advance - even though they haven't actually
got to that point in the instruction execution flow yet.  This permits the
actual load instruction to potentially complete immediately because the CPU
already has the value to hand.

It may turn out that the CPU didn't actually need the value - perhaps because a
branch circumvented the load - in which case it can discard the value or just
cache it for later use.

Consider:

	CPU 1			CPU 2
	=======================	=======================
				LOAD B
				DIVIDE		} Divide instructions generally
				DIVIDE		} take a long time to perform
				LOAD A

Which might appear as this:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the             +-------+   ~    |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	Once the divisions are complete -->     :       :   ~-->|       |
	the CPU can then perform the            :       :       |       |
	LOAD with immediate effect              :       :       +-------+

Placing a read barrier or a data dependency barrier just before the second
load:

	CPU 1			CPU 2
	=======================	=======================
				LOAD B
				DIVIDE
				DIVIDE
				<read barrier>
				LOAD A

will force any value speculatively obtained to be reconsidered to an extent
dependent on the type of barrier used.  If there was no change made to the
speculated memory location, then the speculated value will just be used:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the             +-------+   ~    |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	                                        :       :   ~   |       |
	                                    rrrrrrrrrrrrrrrr~   |       |
	                                        :       :   ~   |       |
	                                        :       :   ~-->|       |
	                                        :       :       |       |
	                                        :       :       +-------+

but if there was an update or an invalidation from another CPU pending, then
the speculation will be cancelled and the value reloaded:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the             +-------+   ~    |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	                                        :       :   ~   |       |
	                                    rrrrrrrrrrrrrrrrr   |       |
	                                        +-------+       |       |
	The speculation is discarded --->   --->| A->1  |------>|       |
	and an updated value is                 +-------+       |       |
	retrieved                               :       :       +-------+


MULTICOPY ATOMICITY
-------------------

Multicopy atomicity is a deeply intuitive notion about ordering that is
not always provided by real computer systems, namely that a given store
becomes visible at the same time to all CPUs, or, alternatively, that all
CPUs agree on the order in which all stores become visible.  However,
support of full multicopy atomicity would rule out valuable hardware
optimizations, so a weaker form called ``other multicopy atomicity''
instead guarantees only that a given store becomes visible at the same
time to all -other- CPUs.  The remainder of this document discusses this
weaker form, but for brevity will call it simply ``multicopy atomicity''.

The following example demonstrates multicopy atomicity:

	CPU 1			CPU 2			CPU 3
	=======================	=======================	=======================
		{ X = 0, Y = 0 }
	STORE X=1		r1=LOAD X (reads 1)	LOAD Y (reads 1)
				<general barrier>	<read barrier>
				STORE Y=r1		LOAD X

Suppose that CPU 2's load from X returns 1, which it then stores to Y, and
CPU 3's load from Y returns 1.
Suppose that CPU 2's load from X returns 1, which it then stores to Y, and that CPU 3's load from Y returns 1. This indicates that CPU 1's store to X precedes CPU 2's load from X and that CPU 2's store to Y precedes CPU 3's load from Y. In addition, the memory barriers guarantee that CPU 2 executes its load before its store, and CPU 3 loads from Y before it loads from X. The question is then "Can CPU 3's load from X return 0?"

Because CPU 3's load from X in some sense comes after CPU 2's load, it is natural to expect that CPU 3's load from X must therefore return 1. This expectation follows from multicopy atomicity: if a load executing on CPU B follows a load from the same variable executing on CPU A (and CPU A did not originally store the value which it read), then on multicopy-atomic systems, CPU B's load must return either the same value that CPU A's load did or some later value. However, the Linux kernel does not require systems to be multicopy atomic.

The use of a general memory barrier in the example above compensates for any lack of multicopy atomicity. In the example, if CPU 2's load from X returns 1 and CPU 3's load from Y returns 1, then CPU 3's load from X must indeed also return 1.

However, dependencies, read barriers, and write barriers are not always able to compensate for non-multicopy atomicity. For example, suppose that CPU 2's general barrier is removed from the above example, leaving only the data dependency shown below:

	CPU 1                   CPU 2                   CPU 3
	======================= ======================= =======================
	    { X = 0, Y = 0 }
	STORE X=1               r1=LOAD X (reads 1)     LOAD Y (reads 1)
	                        <data dependency>       <read barrier>
	                        STORE Y=r1              LOAD X (reads 0)

This substitution allows non-multicopy atomicity to run rampant: in this example, it is perfectly legal for CPU 2's load from X to return 1, CPU 3's load from Y to return 1, and its load from X to return 0.

The key point is that although CPU 2's data dependency orders its load and store, it does not guarantee to order CPU 1's store. Thus, if this example runs on a non-multicopy-atomic system where CPUs 1 and 2 share a store buffer or a level of cache, CPU 2 might have early access to CPU 1's writes. General barriers are therefore required to ensure that all CPUs agree on the combined order of multiple accesses.

General barriers can compensate not only for non-multicopy atomicity, but can also generate additional ordering that can ensure that -all- CPUs will perceive the same order of -all- operations. In contrast, a chain of release-acquire pairs does not provide this additional ordering, which means that only those CPUs on the chain are guaranteed to agree on the combined order of the accesses. For example, switching to C code in deference to the ghost of Herman Hollerith:

	int u, v, x, y, z;

	void cpu0(void)
	{
		r0 = smp_load_acquire(&x);
		WRITE_ONCE(u, 1);
		smp_store_release(&y, 1);
	}

	void cpu1(void)
	{
		r1 = smp_load_acquire(&y);
		r4 = READ_ONCE(v);
		r5 = READ_ONCE(u);
		smp_store_release(&z, 1);
	}

	void cpu2(void)
	{
		r2 = smp_load_acquire(&z);
		smp_store_release(&x, 1);
	}

	void cpu3(void)
	{
		WRITE_ONCE(v, 1);
		smp_mb();
		r3 = READ_ONCE(u);
	}

Because cpu0(), cpu1(), and cpu2() participate in a chain of smp_store_release()/smp_load_acquire() pairs, the following outcome is prohibited:

	r0 == 1 && r1 == 1 && r2 == 1

Furthermore, because of the release-acquire relationship between cpu0() and cpu1(), cpu1() must see cpu0()'s writes, so that the following outcome is prohibited:

	r1 == 1 && r5 == 0

However, the ordering provided by a release-acquire chain is local to the CPUs participating in that chain and does not apply to cpu3(), at least aside from stores.
Therefore, the following outcome is possible:

	r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0

As an aside, the following outcome is also possible:

	r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0 && r5 == 1

Although cpu0(), cpu1(), and cpu2() will see their respective reads and writes in order, CPUs not involved in the release-acquire chain might well disagree on the order. This disagreement stems from the fact that the weak memory-barrier instructions used to implement smp_load_acquire() and smp_store_release() are not required to order prior stores against subsequent loads in all cases. This means that cpu3() can see cpu0()'s store to u as happening -after- cpu1()'s load from v, even though both cpu0() and cpu1() agree that these two operations occurred in the intended order.

However, please keep in mind that smp_load_acquire() is not magic. In particular, it simply reads from its argument with ordering. It does -not- ensure that any particular value will be read. Therefore, the following outcome is possible:

	r0 == 0 && r1 == 0 && r2 == 0 && r5 == 0

Note that this outcome can happen even on a mythical sequentially consistent system where nothing is ever reordered.

To reiterate, if your code requires full ordering of all operations, use general barriers throughout.


========================
EXPLICIT KERNEL BARRIERS
========================

The Linux kernel has a variety of different barriers that act at different levels:

  (*) Compiler barrier.

  (*) CPU memory barriers.


COMPILER BARRIER
----------------

The Linux kernel has an explicit compiler barrier function that prevents the compiler from moving the memory accesses either side of it to the other side:

	barrier();

This is a general barrier -- there are no read-read or write-write variants of barrier(). However, READ_ONCE() and WRITE_ONCE() can be thought of as weak forms of barrier() that affect only the specific accesses flagged by the READ_ONCE() or WRITE_ONCE().

The barrier() function has the following effects:

 (*) Prevents the compiler from reordering accesses following the barrier() to precede any accesses preceding the barrier(). One example use for this property is to ease communication between interrupt-handler code and the code that was interrupted.

 (*) Within a loop, forces the compiler to load the variables used in that loop's conditional on each pass through that loop.

The READ_ONCE() and WRITE_ONCE() functions can prevent any number of optimizations that, while perfectly safe in single-threaded code, can be fatal in concurrent code. Here are some examples of these sorts of optimizations:

 (*) The compiler is within its rights to reorder loads and stores to the same variable, and in some cases, the CPU is within its rights to reorder loads to the same variable. This means that the following code:

	a[0] = x;
	a[1] = x;

     might result in an older value of x stored in a[1] than in a[0]. Prevent both the compiler and the CPU from doing this as follows:

	a[0] = READ_ONCE(x);
	a[1] = READ_ONCE(x);

     In short, READ_ONCE() and WRITE_ONCE() provide cache coherence for accesses from multiple CPUs to a single variable.

 (*) The compiler is within its rights to merge successive loads from the same variable.
     Such merging can cause the compiler to "optimize" the following code:

	while (tmp = a)
		do_something_with(tmp);

     into the following code, which, although in some sense legitimate for single-threaded code, is almost certainly not what the developer intended:

	if (tmp = a)
		for (;;)
			do_something_with(tmp);

     Use READ_ONCE() to prevent the compiler from doing this to you:

	while (tmp = READ_ONCE(a))
		do_something_with(tmp);

 (*) The compiler is within its rights to reload a variable, for example, in cases where high register pressure prevents the compiler from keeping all data of interest in registers. The compiler might therefore optimize the variable 'tmp' out of our previous example:

	while (tmp = a)
		do_something_with(tmp);

     This could result in the following code, which is perfectly safe in single-threaded code, but can be fatal in concurrent code:

	while (a)
		do_something_with(a);

     For example, the optimized version of this code could result in passing a zero to do_something_with() in the case where the variable a was modified by some other CPU between the "while" statement and the call to do_something_with().

     Again, use READ_ONCE() to prevent the compiler from doing this:

	while (tmp = READ_ONCE(a))
		do_something_with(tmp);

     Note that if the compiler runs short of registers, it might save tmp onto the stack. The overhead of this saving and later restoring is why compilers reload variables. Doing so is perfectly safe for single-threaded code, so you need to tell the compiler about cases where it is not safe.

 (*) The compiler is within its rights to omit a load entirely if it knows what the value will be. For example, if the compiler can prove that the value of variable 'a' is always zero, it can optimize this code:

	while (tmp = a)
		do_something_with(tmp);

     into this:

	do { } while (0);

     This transformation is a win for single-threaded code because it gets rid of a load and a branch. The problem is that the compiler will carry out its proof assuming that the current CPU is the only one updating variable 'a'. If variable 'a' is shared, then the compiler's proof will be erroneous. Use READ_ONCE() to tell the compiler that it doesn't know as much as it thinks it does:

	while (tmp = READ_ONCE(a))
		do_something_with(tmp);

     But please note that the compiler is also closely watching what you do with the value after the READ_ONCE(). For example, suppose you do the following and MAX is a preprocessor macro with the value 1:

	while ((tmp = READ_ONCE(a)) % MAX)
		do_something_with(tmp);

     Then the compiler knows that the result of the "%" operator applied to MAX will always be zero, again allowing the compiler to optimize the code into near-nonexistence. (It will still load from the variable 'a'.)

 (*) Similarly, the compiler is within its rights to omit a store entirely if it knows that the variable already has the value being stored. Again, the compiler assumes that the current CPU is the only one storing into the variable, which can cause the compiler to do the wrong thing for shared variables. For example, suppose you have the following:

	a = 0;
	... Code that does not store to variable a ...
	a = 0;

     The compiler sees that the value of variable 'a' is already zero, so it might well omit the second store. This would come as a fatal surprise if some other CPU might have stored to variable 'a' in the meantime.

     Use WRITE_ONCE() to prevent the compiler from making this sort of wrong guess:

	WRITE_ONCE(a, 0);
	... Code that does not store to variable a ...
	WRITE_ONCE(a, 0);

 (*) The compiler is within its rights to reorder memory accesses unless you tell it not to. For example, consider the following interaction between process-level code and an interrupt handler:

	void process_level(void)
	{
		msg = get_message();
		flag = true;
	}

	void interrupt_handler(void)
	{
		if (flag)
			process_message(msg);
	}

     There is nothing to prevent the compiler from transforming process_level() to the following; in fact, this might well be a win for single-threaded code:

	void process_level(void)
	{
		flag = true;
		msg = get_message();
	}

     If the interrupt occurs between these two statements, then interrupt_handler() might be passed a garbled msg. Use WRITE_ONCE() to prevent this as follows:

	void process_level(void)
	{
		WRITE_ONCE(msg, get_message());
		WRITE_ONCE(flag, true);
	}

	void interrupt_handler(void)
	{
		if (READ_ONCE(flag))
			process_message(READ_ONCE(msg));
	}

     Note that the READ_ONCE() and WRITE_ONCE() wrappers in interrupt_handler() are needed if this interrupt handler can itself be interrupted by something that also accesses 'flag' and 'msg', for example, a nested interrupt or an NMI. Otherwise, READ_ONCE() and WRITE_ONCE() are not needed in interrupt_handler() other than for documentation purposes. (Note also that nested interrupts do not typically occur in modern Linux kernels; in fact, if an interrupt handler returns with interrupts enabled, you will get a WARN_ONCE() splat.)

     You should assume that the compiler can move READ_ONCE() and WRITE_ONCE() past code not containing READ_ONCE(), WRITE_ONCE(), barrier(), or similar primitives.

     This effect could also be achieved using barrier(), but READ_ONCE() and WRITE_ONCE() are more selective: With READ_ONCE() and WRITE_ONCE(), the compiler need only forget the contents of the indicated memory locations, while with barrier() the compiler must discard the value of all memory locations that it has currently cached in any machine registers. Of course, the compiler must also respect the order in which the READ_ONCE()s and WRITE_ONCE()s occur, though the CPU of course need not do so.

 (*) The compiler is within its rights to invent stores to a variable, as in the following example:

	if (a)
		b = a;
	else
		b = 42;

     The compiler might save a branch by optimizing this as follows:

	b = 42;
	if (a)
		b = a;

     In single-threaded code, this is not only safe, but also saves a branch. Unfortunately, in concurrent code, this optimization could cause some other CPU to see a spurious value of 42 -- even if variable 'a' was never zero -- when loading variable 'b'. Use WRITE_ONCE() to prevent this as follows:

	if (a)
		WRITE_ONCE(b, a);
	else
		WRITE_ONCE(b, 42);

     The compiler can also invent loads. These are usually less damaging, but they can result in cache-line bouncing and thus in poor performance and scalability. Use READ_ONCE() to prevent invented loads.

 (*) For aligned memory locations whose size allows them to be accessed with a single memory-reference instruction, READ_ONCE() and WRITE_ONCE() prevent "load tearing" and "store tearing," in which a single large access is replaced by multiple smaller accesses. For example, given an architecture having 16-bit store instructions with 7-bit immediate fields, the compiler might be tempted to use two 16-bit store-immediate instructions to implement the following 32-bit store:

	p = 0x00010002;

     Please note that GCC really does use this sort of optimization, which is not surprising given that it would likely take more than two instructions to build the constant and then store it. This optimization can therefore be a win in single-threaded code.
     In fact, a recent bug (since fixed) caused GCC to incorrectly use this optimization in a volatile store. In the absence of such bugs, use of WRITE_ONCE() prevents store tearing in the following example:

	WRITE_ONCE(p, 0x00010002);

     Use of packed structures can also result in load and store tearing, as in this example:

	struct __attribute__((__packed__)) foo {
		short a;
		int b;
		short c;
	};
	struct foo foo1, foo2;
	...

	foo2.a = foo1.a;
	foo2.b = foo1.b;
	foo2.c = foo1.c;

     Because there are no READ_ONCE() or WRITE_ONCE() wrappers and no volatile markings, the compiler would be well within its rights to implement these three assignment statements as a pair of 32-bit loads followed by a pair of 32-bit stores. This would result in load tearing on 'foo1.b' and store tearing on 'foo2.b'. READ_ONCE() and WRITE_ONCE() again prevent tearing in this example:

	foo2.a = foo1.a;
	WRITE_ONCE(foo2.b, READ_ONCE(foo1.b));
	foo2.c = foo1.c;

All that aside, it is never necessary to use READ_ONCE() and WRITE_ONCE() on a variable that has been marked volatile. For example, because 'jiffies' is marked volatile, it is never necessary to say READ_ONCE(jiffies). The reason for this is that READ_ONCE() and WRITE_ONCE() are implemented as volatile casts, which have no effect when their argument is already marked volatile.

Please note that these compiler barriers have no direct effect on the CPU, which may then reorder things however it wishes.
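To give a rough feel for why these primitives constrain the compiler, here is a simplified model of how such primitives can be defined. This is an illustrative sketch only, not the kernel's actual definitions, which contain considerably more machinery:

	/* Compiler-only fence: the "memory" clobber tells the compiler that
	 * memory may have changed, so values cached in registers must be
	 * discarded and accesses may not move across this point. */
	#define my_barrier()		__asm__ __volatile__("" : : : "memory")

	/* Volatile accesses: the compiler must perform exactly one load or
	 * store, in program order relative to other volatile accesses. */
	#define MY_READ_ONCE(x)		(*(const volatile typeof(x) *)&(x))
	#define MY_WRITE_ONCE(x, val)	(*(volatile typeof(x) *)&(x) = (val))

Because these are compiler-level constructs only, they say nothing about what the CPU may do; hence the CPU memory barriers described next.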
CPU MEMORY BARRIERS
-------------------

The Linux kernel has the following basic CPU memory barriers:

	TYPE            MANDATORY       SMP CONDITIONAL
	=============== =============== ===============
	GENERAL         mb()            smp_mb()
	WRITE           wmb()           smp_wmb()
	READ            rmb()           smp_rmb()
	DATA DEPENDENCY                 READ_ONCE()

All memory barriers except the data dependency barriers imply a compiler barrier. Data dependencies do not impose any additional compiler ordering.

Aside: In the case of data dependencies, the compiler would be expected to issue the loads in the correct order (eg. `a[b]` would have to load the value of b before loading a[b]); however, there is no guarantee in the C specification that the compiler may not speculate the value of b (eg. is equal to 1) and load a[b] before b (eg. tmp = a[1]; if (b != 1) tmp = a[b];). There is also the problem of a compiler reloading b after having loaded a[b], thus having a newer copy of b than a[b]. A consensus has not yet been reached about these problems, however the READ_ONCE() macro is a good place to start looking.

SMP memory barriers are reduced to compiler barriers on uniprocessor compiled systems because it is assumed that a CPU will appear to be self-consistent, and will order overlapping accesses correctly with respect to itself. However, see the subsection on "Virtual Machine Guests" below.

[!] Note that SMP memory barriers _must_ be used to control the ordering of references to shared memory on SMP systems, though the use of locking instead is sufficient.

Mandatory barriers should not be used to control SMP effects, since mandatory barriers impose unnecessary overhead on both SMP and UP systems. They may, however, be used to control MMIO effects on accesses through relaxed memory I/O windows. These barriers are required even on non-SMP systems as they affect the order in which memory operations appear to a device by prohibiting both the compiler and the CPU from reordering them.

There are some more advanced barrier functions:

 (*) smp_store_mb(var, value)

     This assigns the value to the variable and then inserts a full memory barrier after it. It isn't guaranteed to insert anything more than a compiler barrier in a UP compilation. (A sketch of a plausible expansion follows this list.)

 (*) smp_mb__before_atomic();
 (*) smp_mb__after_atomic();

     These are for use with atomic RMW functions that do not imply memory barriers, but where the code needs a memory barrier. Examples of atomic RMW functions that do not imply a memory barrier are add, subtract, (failed) conditional operations, and the _relaxed functions, but not atomic_read or atomic_set. A common example where a memory barrier may be required is when atomic ops are used for reference counting.

     These are also used for atomic RMW bitop functions that do not imply a memory barrier (such as set_bit and clear_bit).

     As an example, consider a piece of code that marks an object as being dead and then decrements the object's reference count:

	obj->dead = 1;
	smp_mb__before_atomic();
	atomic_dec(&obj->ref_count);

     This makes sure that the death mark on the object is perceived to be set *before* the reference counter is decremented.

     See Documentation/atomic_{t,bitops}.txt for more information.

 (*) dma_wmb();
 (*) dma_rmb();

     These are for use with consistent memory to guarantee the ordering of writes or reads of shared memory accessible to both the CPU and a DMA capable device.

     For example, consider a device driver that shares memory with a device and uses a descriptor status value to indicate if the descriptor belongs to the device or the CPU, and a doorbell to notify it when new descriptors are available:

	if (desc->status != DEVICE_OWN) {
		/* do not read data until we own descriptor */
		dma_rmb();

		/* read/modify data */
		read_data = desc->data;
		desc->data = write_data;

		/* flush modifications before status update */
		dma_wmb();

		/* assign ownership */
		desc->status = DEVICE_OWN;

		/* notify device of new descriptors */
		writel(DESC_NOTIFY, doorbell);
	}

     The dma_rmb() allows us to guarantee that the device has released ownership before we read the data from the descriptor, and the dma_wmb() allows us to guarantee that the data is written to the descriptor before the device can see that it now has ownership. Note that, when using writel(), a prior wmb() is not needed to guarantee that the cache coherent memory writes have completed before writing to the MMIO region. The cheaper writel_relaxed() does not provide this guarantee and must not be used here.

     See the subsection "Kernel I/O barrier effects" for more information on relaxed I/O accessors and the Documentation/core-api/dma-api.rst file for more information on consistent memory.

 (*) pmem_wmb();

     This is for use with persistent memory to ensure that stores for which modifications are written to persistent storage have reached a platform durability domain.

     For example, after a non-temporal write to a pmem region, we use pmem_wmb() to ensure that stores have reached a platform durability domain. This ensures that stores have updated persistent storage before any data access or data transfer caused by subsequent instructions is initiated. This is in addition to the ordering done by wmb(). For loads from persistent memory, existing read memory barriers are sufficient to ensure read ordering.
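As promised above, here is a sketch of how smp_store_mb() might plausibly expand. This illustrates the semantics only and is not necessarily any particular architecture's implementation:

	/* illustrative only: a store followed by a full SMP memory barrier */
	#define my_smp_store_mb(var, value)		\
		do {					\
			WRITE_ONCE(var, value);		\
			smp_mb();			\
		} while (0)

On a UP build, smp_mb() degenerates to a compiler barrier, which matches the weaker UP guarantee described above.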
===============================
IMPLICIT KERNEL MEMORY BARRIERS
===============================

Some of the other functions in the Linux kernel imply memory barriers, amongst which are locking and scheduling functions. This specification is a _minimum_ guarantee; any particular architecture may provide more substantial guarantees, but these may not be relied upon outside of arch specific code.


LOCK ACQUISITION FUNCTIONS
--------------------------

The Linux kernel has a number of locking constructs:

 (*) spin locks
 (*) R/W spin locks
 (*) mutexes
 (*) semaphores
 (*) R/W semaphores

In all cases there are variants on "ACQUIRE" operations and "RELEASE" operations for each construct. These operations all imply certain barriers:

 (1) ACQUIRE operation implication:

     Memory operations issued after the ACQUIRE will be completed after the ACQUIRE operation has completed.

     Memory operations issued before the ACQUIRE may be completed after the ACQUIRE operation has completed.

 (2) RELEASE operation implication:

     Memory operations issued before the RELEASE will be completed before the RELEASE operation has completed.

     Memory operations issued after the RELEASE may be completed before the RELEASE operation has completed.

 (3) ACQUIRE vs ACQUIRE implication:

     All ACQUIRE operations issued before another ACQUIRE operation will be completed before that ACQUIRE operation.

 (4) ACQUIRE vs RELEASE implication:

     All ACQUIRE operations issued before a RELEASE operation will be completed before the RELEASE operation.

 (5) Failed conditional ACQUIRE implication:

     Certain locking variants of the ACQUIRE operation may fail, either due to being unable to get the lock immediately, or due to receiving an unblocked signal while asleep waiting for the lock to become available. Failed locks do not imply any sort of barrier.

[!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only one-way barriers is that the effects of instructions outside of a critical section may seep into the inside of the critical section.

An ACQUIRE followed by a RELEASE may not be assumed to be a full memory barrier because it is possible for an access preceding the ACQUIRE to happen after the ACQUIRE, and an access following the RELEASE to happen before the RELEASE, and the two accesses can themselves then cross:

	*A = a;
	ACQUIRE M
	RELEASE M
	*B = b;

may occur as:

	ACQUIRE M, STORE *B, STORE *A, RELEASE M

When the ACQUIRE and RELEASE are a lock acquisition and release, respectively, this same reordering can occur if the lock's ACQUIRE and RELEASE are to the same lock variable, but only from the perspective of another CPU not holding that lock. In short, an ACQUIRE followed by a RELEASE may -not- be assumed to be a full memory barrier.

Similarly, the reverse case of a RELEASE followed by an ACQUIRE does not imply a full memory barrier. Therefore, the CPU's execution of the critical sections corresponding to the RELEASE and the ACQUIRE can cross, so that:

	*A = a;
	RELEASE M
	ACQUIRE N
	*B = b;

could occur as:

	ACQUIRE N, STORE *B, STORE *A, RELEASE M

It might appear that this reordering could introduce a deadlock. However, this cannot happen because if such a deadlock threatened, the RELEASE would simply complete, thereby avoiding the deadlock.

Why does this work? One key point is that we are only talking about the CPU doing the reordering, not the compiler. If the compiler (or, for that matter, the developer) switched the operations, deadlock -could- occur.

But suppose the CPU reordered the operations. In this case, the unlock precedes the lock in the assembly code. The CPU simply elected to try executing the later lock operation first. If there is a deadlock, this lock operation will simply spin (or try to sleep, but more on that later). The CPU will eventually execute the unlock operation (which preceded the lock operation in the assembly code), which will unravel the potential deadlock, allowing the lock operation to succeed.

But what if the lock is a sleeplock? In that case, the code will try to enter the scheduler, where it will eventually encounter a memory barrier, which will force the earlier unlock operation to complete, again unraveling the deadlock. There might be a sleep-unlock race, but the locking primitive needs to resolve such races properly in any case.
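Returning to the one-way nature of ACQUIRE and RELEASE, the first reordering shown above can be written as a minimal kernel-C sketch (the lock and the variables are assumed for illustration):

	static DEFINE_SPINLOCK(m);
	int A, B;

	void seepage(void)
	{
		WRITE_ONCE(A, 1);	/* may be reordered into the section */
		spin_lock(&m);		/* ACQUIRE: one-way barrier */
		spin_unlock(&m);	/* RELEASE: one-way barrier */
		WRITE_ONCE(B, 1);	/* may be reordered into the section */
	}

Another CPU not holding the lock may therefore observe the store to B before the store to A; an empty critical section is not a substitute for smp_mb().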
Locks and semaphores may not provide any guarantee of ordering on UP compiled systems, and so cannot be counted on in such a situation to actually achieve anything at all - especially with respect to I/O accesses - unless combined with interrupt disabling operations.

See also the section on "Inter-CPU acquiring barrier effects".

As an example, consider the following:

	*A = a;
	*B = b;
	ACQUIRE
	*C = c;
	*D = d;
	RELEASE
	*E = e;
	*F = f;

The following sequence of events is acceptable:

	ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE

	[+] Note that {*F,*A} indicates a combined access.

But none of the following are:

	{*F,*A}, *B,    ACQUIRE, *C, *D,        RELEASE, *E
	*A, *B, *C,     ACQUIRE, *D,            RELEASE, *E, *F
	*A, *B,         ACQUIRE, *C,            RELEASE, *D, *E, *F
	*B,             ACQUIRE, *C, *D,        RELEASE, {*F,*A}, *E


INTERRUPT DISABLING FUNCTIONS
-----------------------------

Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts (RELEASE equivalent) will act as compiler barriers only. So if memory or I/O barriers are required in such a situation, they must be provided by some other means.


SLEEP AND WAKE-UP FUNCTIONS
---------------------------

Sleeping and waking on an event flagged in global data can be viewed as an interaction between two pieces of data: the task state of the task waiting for the event and the global data used to indicate the event. To make sure that these appear to happen in the right order, the primitives to begin the process of going to sleep, and the primitives to initiate a wake up imply certain barriers.

Firstly, the sleeper normally follows something like this sequence of events:

	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (event_indicated)
			break;
		schedule();
	}

A general memory barrier is interpolated automatically by set_current_state() after it has altered the task state:

	CPU 1
	===============================
	set_current_state();
	  smp_store_mb();
	    STORE current->state
	    <general barrier>
	LOAD event_indicated

set_current_state() may be wrapped by:

	prepare_to_wait();
	prepare_to_wait_exclusive();

which therefore also imply a general memory barrier after setting the state. The whole sequence above is available in various canned forms, all of which interpolate the memory barrier in the right place:

	wait_event();
	wait_event_interruptible();
	wait_event_interruptible_exclusive();
	wait_event_interruptible_timeout();
	wait_event_killable();
	wait_event_timeout();
	wait_on_bit();
	wait_on_bit_lock();
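For instance, a sleeper/waker pair using one of the canned forms might look like the following sketch (the event flag and wait queue are assumed for illustration); the barriers discussed in this section are supplied internally by wait_event() and wake_up():

	static DECLARE_WAIT_QUEUE_HEAD(event_wait_queue);
	static int event_indicated;

	/* sleeper: the condition is re-checked with the task state set */
	void wait_for_event(void)
	{
		wait_event(event_wait_queue, READ_ONCE(event_indicated));
	}

	/* waker: set the flag, then wake any sleepers */
	void indicate_event(void)
	{
		WRITE_ONCE(event_indicated, 1);
		wake_up(&event_wait_queue);
	}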
Secondly, code that performs a wake up normally follows something like this:

	event_indicated = 1;
	wake_up(&event_wait_queue);

or:

	event_indicated = 1;
	wake_up_process(event_daemon);

A general memory barrier is executed by wake_up() if it wakes something up. If it doesn't wake anything up then a memory barrier may or may not be executed; you must not rely on it. The barrier occurs before the task state is accessed; in particular, it sits between the STORE to indicate the event and the STORE to set TASK_RUNNING:

	CPU 1 (Sleeper)                 CPU 2 (Waker)
	=============================== ===============================
	set_current_state();            STORE event_indicated
	  smp_store_mb();               wake_up();
	    STORE current->state          ...
	    <general barrier>             <general barrier>
	LOAD event_indicated            if ((LOAD task->state) & TASK_NORMAL)
	                                  STORE task->state

where "task" is the thread being woken up and it equals CPU 1's "current".

To repeat, a general memory barrier is guaranteed to be executed by wake_up() if something is actually awakened, but otherwise there is no such guarantee. To see this, consider the following sequence of events, where X and Y are both initially zero:

	CPU 1                           CPU 2
	=============================== ===============================
	X = 1;                          Y = 1;
	smp_mb();                       wake_up();
	LOAD Y                          LOAD X

If a wakeup does occur, one (at least) of the two loads must see 1. If, on the other hand, a wakeup does not occur, both loads might see 0.

wake_up_process() always executes a general memory barrier. The barrier again occurs before the task state is accessed. In particular, if the wake_up() in the previous snippet were replaced by a call to wake_up_process() then one of the two loads would be guaranteed to see 1.

The available waker functions include:

	complete();
	wake_up();
	wake_up_all();
	wake_up_bit();
	wake_up_interruptible();
	wake_up_interruptible_all();
	wake_up_interruptible_nr();
	wake_up_interruptible_poll();
	wake_up_interruptible_sync();
	wake_up_interruptible_sync_poll();
	wake_up_locked();
	wake_up_locked_poll();
	wake_up_nr();
	wake_up_poll();
	wake_up_process();

In terms of memory ordering, these functions all provide the same guarantees as a wake_up() (or stronger).

[!] Note that the memory barriers implied by the sleeper and the waker do _not_ order multiple stores before the wake-up with respect to loads of those stored values after the sleeper has called set_current_state(). For instance, if the sleeper does:

	set_current_state(TASK_INTERRUPTIBLE);
	if (event_indicated)
		break;
	__set_current_state(TASK_RUNNING);
	do_something(my_data);

and the waker does:

	my_data = value;
	event_indicated = 1;
	wake_up(&event_wait_queue);

there's no guarantee that the change to event_indicated will be perceived by the sleeper as coming after the change to my_data. In such a circumstance, the code on both sides must interpolate its own memory barriers between the separate data accesses. Thus the above sleeper ought to do:

	set_current_state(TASK_INTERRUPTIBLE);
	if (event_indicated) {
		smp_rmb();
		do_something(my_data);
	}

and the waker should do:

	my_data = value;
	smp_wmb();
	event_indicated = 1;
	wake_up(&event_wait_queue);


MISCELLANEOUS FUNCTIONS
-----------------------

Other functions that imply barriers:

 (*) schedule() and similar imply full memory barriers.


===================================
INTER-CPU ACQUIRING BARRIER EFFECTS
===================================

On SMP systems locking primitives give a more substantial form of barrier: one that does affect memory access ordering on other CPUs, within the context of conflict on any particular lock.


ACQUIRES VS MEMORY ACCESSES
---------------------------

Consider the following: the system has a pair of spinlocks (M) and (Q), and three CPUs; then should the following sequence of events occur:

	CPU 1                           CPU 2
	=============================== ===============================
	WRITE_ONCE(*A, a);              WRITE_ONCE(*E, e);
	ACQUIRE M                       ACQUIRE Q
	WRITE_ONCE(*B, b);              WRITE_ONCE(*F, f);
	WRITE_ONCE(*C, c);              WRITE_ONCE(*G, g);
	RELEASE M                       RELEASE Q
	WRITE_ONCE(*D, d);              WRITE_ONCE(*H, h);

Then there is no guarantee as to what order CPU 3 will see the accesses to *A through *H occur in, other than the constraints imposed by the separate locks on the separate CPUs.
It might, for example, see:

	*E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M

But it won't see any of:

	*B, *C or *D preceding ACQUIRE M
	*A, *B or *C following RELEASE M
	*F, *G or *H preceding ACQUIRE Q
	*E, *F or *G following RELEASE Q


=================================
WHERE ARE MEMORY BARRIERS NEEDED?
=================================

Under normal operation, memory operation reordering is generally not going to be a problem as a single-threaded linear piece of code will still appear to work correctly, even if it's in an SMP kernel. There are, however, four circumstances in which reordering definitely _could_ be a problem:

 (*) Interprocessor interaction.

 (*) Atomic operations.

 (*) Accessing devices.

 (*) Interrupts.


INTERPROCESSOR INTERACTION
--------------------------

When there's a system with more than one processor, more than one CPU in the system may be working on the same data set at the same time. This can cause synchronisation problems, and the usual way of dealing with them is to use locks. Locks, however, are quite expensive, and so it may be preferable to operate without the use of a lock if at all possible. In such a case operations that affect both CPUs may have to be carefully ordered to prevent a malfunction.

Consider, for example, the R/W semaphore slow path. Here a waiting process is queued on the semaphore, by virtue of it having a piece of its stack linked to the semaphore's list of waiting processes:

	struct rw_semaphore {
		...
		spinlock_t lock;
		struct list_head waiters;
	};

	struct rwsem_waiter {
		struct list_head list;
		struct task_struct *task;
	};

To wake up a particular waiter, the up_read() or up_write() functions have to:

 (1) read the next pointer from this waiter's record to know where the next waiter record is;

 (2) read the pointer to the waiter's task structure;

 (3) clear the task pointer to tell the waiter it has been given the semaphore;

 (4) call wake_up_process() on the task; and

 (5) release the reference held on the waiter's task struct.

In other words, it has to perform this sequence of events:

	LOAD waiter->list.next;
	LOAD waiter->task;
	STORE waiter->task;
	CALL wakeup
	RELEASE task

and if any of these steps occur out of order, then the whole thing may malfunction.

Once it has queued itself and dropped the semaphore lock, the waiter does not get the lock again; it instead just waits for its task pointer to be cleared before proceeding. Since the record is on the waiter's stack, this means that if the task pointer is cleared _before_ the next pointer in the list is read, another CPU might start processing the waiter and might clobber the waiter's stack before the up*() function has a chance to read the next pointer.

Consider then what might happen to the above sequence of events:

	CPU 1                           CPU 2
	=============================== ===============================
	                                down_xxx()
	                                Queue waiter
	                                Sleep
	up_yyy()
	LOAD waiter->task;
	STORE waiter->task;
	                                Woken up by other event
	<preempt>
	                                Resume processing
	                                down_xxx() returns
	                                call foo()
	                                foo() clobbers *waiter
	</preempt>
	LOAD waiter->list.next;
	--- OOPS ---

This could be dealt with using the semaphore lock, but then the down_xxx() function has to needlessly get the spinlock again after being woken up.
The way to deal with this is to insert a general SMP memory barrier:

	LOAD waiter->list.next;
	LOAD waiter->task;
	smp_mb();
	STORE waiter->task;
	CALL wakeup
	RELEASE task

In this case, the barrier makes a guarantee that all memory accesses before the barrier will appear to happen before all the memory accesses after the barrier with respect to the other CPUs on the system. It does _not_ guarantee that all the memory accesses before the barrier will be complete by the time the barrier instruction itself is complete.

On a UP system - where this wouldn't be a problem - the smp_mb() is just a compiler barrier, thus making sure the compiler emits the instructions in the right order without actually intervening in the CPU. Since there's only one CPU, that CPU's dependency ordering logic will take care of everything else.


ATOMIC OPERATIONS
-----------------

While they are technically interprocessor interaction considerations, atomic operations are noted specially as some of them imply full memory barriers and some don't, but they're very heavily relied on as a group throughout the kernel.

See Documentation/atomic_t.txt for more information.


ACCESSING DEVICES
-----------------

Many devices can be memory mapped, and so appear to the CPU as if they're just a set of memory locations. To control such a device, the driver usually has to make the right memory accesses in exactly the right order.

However, having a clever CPU or a clever compiler creates a potential problem in that the carefully sequenced accesses in the driver code won't reach the device in the requisite order if the CPU or the compiler thinks it is more efficient to reorder, combine or merge accesses - something that would cause the device to malfunction.

Inside of the Linux kernel, I/O should be done through the appropriate accessor routines - such as inb() or writel() - which know how to make such accesses appropriately sequential. While this, for the most part, renders the explicit use of memory barriers unnecessary, if the accessor functions are used to refer to an I/O memory window with relaxed memory access properties, then _mandatory_ memory barriers are required to enforce ordering.

See Documentation/driver-api/device-io.rst for more information.


INTERRUPTS
----------

A driver may be interrupted by its own interrupt service routine, and thus the two parts of the driver may interfere with each other's attempts to control or access the device.

This may be alleviated - at least in part - by disabling local interrupts (a form of locking), such that the critical operations are all contained within the interrupt-disabled section in the driver. While the driver's interrupt routine is executing, the driver's core may not run on the same CPU, and its interrupt is not permitted to happen again until the current interrupt has been handled, thus the interrupt handler does not need to lock against that.

However, consider a driver that was talking to an ethernet card that sports an address register and a data register.
If that driver's core talks to the card under interrupt-disablement and then the driver's interrupt handler is invoked:

	LOCAL IRQ DISABLE
	writew(ADDR, 3);
	writew(DATA, y);
	LOCAL IRQ ENABLE
	<interrupt>
	writew(ADDR, 4);
	q = readw(DATA);
	</interrupt>

The store to the data register might happen after the second store to the address register if ordering rules are sufficiently relaxed:

	STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA

If ordering rules are relaxed, it must be assumed that accesses done inside an interrupt disabled section may leak outside of it and may interleave with accesses performed in an interrupt - and vice versa - unless implicit or explicit barriers are used.

Normally this won't be a problem because the I/O accesses done inside such sections will include synchronous load operations on strictly ordered I/O registers that form implicit I/O barriers.

A similar situation may occur between an interrupt routine and two routines running on separate CPUs that communicate with each other. If such a case is likely, then interrupt-disabling locks should be used to guarantee ordering.


==========================
KERNEL I/O BARRIER EFFECTS
==========================

Interfacing with peripherals via I/O accesses is deeply architecture and device specific. Therefore, drivers which are inherently non-portable may rely on specific behaviours of their target systems in order to achieve synchronization in the most lightweight manner possible. For drivers intending to be portable between multiple architectures and bus implementations, the kernel offers a series of accessor functions that provide various degrees of ordering guarantees:

 (*) readX(), writeX():

     The readX() and writeX() MMIO accessors take a pointer to the peripheral being accessed as an __iomem * parameter. For pointers mapped with the default I/O attributes (e.g. those returned by ioremap()), the ordering guarantees are as follows:

     1. All readX() and writeX() accesses to the same peripheral are ordered with respect to each other. This ensures that MMIO register accesses by the same CPU thread to a particular device will arrive in program order.

     2. A writeX() issued by a CPU thread holding a spinlock is ordered before a writeX() to the same peripheral from another CPU thread issued after a later acquisition of the same spinlock. This ensures that MMIO register writes to a particular device issued while holding a spinlock will arrive in an order consistent with acquisitions of the lock.

     3. A writeX() by a CPU thread to the peripheral will first wait for the completion of all prior writes to memory either issued by, or propagated to, the same thread. This ensures that writes by the CPU to an outbound DMA buffer allocated by dma_alloc_coherent() will be visible to a DMA engine when the CPU writes to its MMIO control register to trigger the transfer.

     4. A readX() by a CPU thread from the peripheral will complete before any subsequent reads from memory by the same thread can begin. This ensures that reads by the CPU from an incoming DMA buffer allocated by dma_alloc_coherent() will not see stale data after reading from the DMA engine's MMIO status register to establish that the DMA transfer has completed.

     5. A readX() by a CPU thread from the peripheral will complete before any subsequent delay() loop can begin execution on the same thread.
     This ensures that two MMIO register writes by the CPU to a peripheral will arrive at least 1us apart if the first write is immediately read back with readX() and udelay(1) is called prior to the second writeX():

	writel(42, DEVICE_REGISTER_0); // Arrives at the device...
	readl(DEVICE_REGISTER_0);
	udelay(1);
	writel(42, DEVICE_REGISTER_1); // ...at least 1us before this.

     The ordering properties of __iomem pointers obtained with non-default attributes (e.g. those returned by ioremap_wc()) are specific to the underlying architecture and therefore the guarantees listed above cannot generally be relied upon for accesses to these types of mappings.

 (*) readX_relaxed(), writeX_relaxed():

     These are similar to readX() and writeX(), but provide weaker memory ordering guarantees. Specifically, they do not guarantee ordering with respect to locking, normal memory accesses or delay() loops (i.e. bullets 2-5 above) but they are still guaranteed to be ordered with respect to other accesses from the same CPU thread to the same peripheral when operating on __iomem pointers mapped with the default I/O attributes.

 (*) readsX(), writesX():

     The readsX() and writesX() MMIO accessors are designed for accessing register-based, memory-mapped FIFOs residing on peripherals that are not capable of performing DMA. Consequently, they provide only the ordering guarantees of readX_relaxed() and writeX_relaxed(), as documented above.

 (*) inX(), outX():

     The inX() and outX() accessors are intended to access legacy port-mapped I/O peripherals, which may require special instructions on some architectures (notably x86). The port number of the peripheral being accessed is passed as an argument.

     Since many CPU architectures ultimately access these peripherals via an internal virtual memory mapping, the portable ordering guarantees provided by inX() and outX() are the same as those provided by readX() and writeX() respectively when accessing a mapping with the default I/O attributes.

     Device drivers may expect outX() to emit a non-posted write transaction that waits for a completion response from the I/O peripheral before returning. This is not guaranteed by all architectures and is therefore not part of the portable ordering semantics.

 (*) insX(), outsX():

     As above, the insX() and outsX() accessors provide the same ordering guarantees as readsX() and writesX() respectively when accessing a mapping with the default I/O attributes.

 (*) ioreadX(), iowriteX():

     These will perform appropriately for the type of access they're actually doing, be it inX()/outX() or readX()/writeX().

With the exception of the string accessors (insX(), outsX(), readsX() and writesX()), all of the above assume that the underlying peripheral is little-endian and will therefore perform byte-swapping operations on big-endian architectures.
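To illustrate guarantees 3 and 4 of readX()/writeX() above, here is a hypothetical driver fragment (the register offsets, flag values, and device structure are invented for the example) that triggers a DMA transfer and later checks its completion:

	/* Fill the outbound buffer obtained from dma_alloc_coherent()... */
	dev->tx_buf[0] = first_word;

	/* ...then ring the doorbell; writel() waits for the prior memory
	 * writes to complete, so the DMA engine sees the new contents. */
	writel(TX_GO, dev->regs + TX_DOORBELL);

	/* Later: check completion; readl() orders the status read before
	 * any subsequent reads from the inbound buffer. */
	if (readl(dev->regs + DMA_STATUS) & DMA_DONE)
		first_rx_word = dev->rx_buf[0];

Had the relaxed accessors writel_relaxed() and readl_relaxed() been used instead, neither of these orderings against normal memory would be guaranteed.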
========================================
ASSUMED MINIMUM EXECUTION ORDERING MODEL
========================================

It has to be assumed that the conceptual CPU is weakly-ordered but that it will maintain the appearance of program causality with respect to itself. Some CPUs (such as i386 or x86_64) are more constrained than others (such as powerpc or frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside of arch-specific code.

This means that it must be considered that the CPU will execute its instruction stream in any order it feels like - or even in parallel - provided that if an instruction in the stream depends on an earlier instruction, then that earlier instruction must be sufficiently complete[*] before the later instruction may proceed; in other words: provided that the appearance of causality is maintained.

 [*] Some instructions have more than one effect - such as changing the condition codes, changing registers or changing memory - and different instructions may depend on different effects.

A CPU may also discard any instruction sequence that winds up having no ultimate effect. For example, if two adjacent instructions both load an immediate value into the same register, the first may be discarded.

Similarly, it has to be assumed that the compiler might reorder the instruction stream in any way it sees fit, again provided the appearance of causality is maintained.


============================
THE EFFECTS OF THE CPU CACHE
============================

The way cached memory operations are perceived across the system is affected to a certain extent by the caches that lie between CPUs and memory, and by the memory coherence system that maintains the consistency of state in the system.

As far as the way a CPU interacts with another part of the system through the caches goes, the memory system has to include the CPU's caches, and memory barriers for the most part act at the interface between the CPU and its cache:

	[ Diagram: each CPU core issues accesses through a memory access queue
	  into its own cache; the caches connect to a shared cache coherency
	  mechanism, which in turn connects to memory and to devices. Memory
	  barriers logically act on the dotted line between each CPU and its
	  cache. ]

Although any particular load or store may not actually appear outside of the CPU that issued it since it may have been satisfied within the CPU's own cache, it will still appear as if the full memory access had taken place as far as the other CPUs are concerned since the cache coherency mechanisms will migrate the cacheline over to the accessing CPU and propagate the effects upon conflict.

The CPU core may execute instructions in any order it deems fit, provided the expected program causality appears to be maintained. Some of the instructions generate load and store operations which then go into the queue of memory accesses to be performed. The core may place these in the queue in any order it wishes, and continue execution until it is forced to wait for an instruction to complete.

What memory barriers are concerned with is controlling the order in which accesses cross from the CPU side of things to the memory side of things, and the order in which the effects are perceived to happen by the other observers in the system.

[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see their own loads and stores as if they had happened in program order.

[!] MMIO or other device accesses may bypass the cache system.
This depends on the properties of the memory window through which devices are accessed and/or the use of any special device communication instructions the CPU may have.


CACHE COHERENCY VS DMA
----------------------

Not all systems maintain cache coherency with respect to devices doing DMA. In such cases, a device attempting DMA may obtain stale data from RAM because dirty cache lines may be resident in the caches of various CPUs, and may not have been written back to RAM yet. To deal with this, the appropriate part of the kernel must flush the overlapping bits of cache on each CPU (and maybe invalidate them as well).

In addition, the data DMA'd to RAM by a device may be overwritten by dirty cache lines being written back to RAM from a CPU's cache after the device has installed its own data, or cache lines present in the CPU's cache may simply obscure the fact that RAM has been updated, until such time as the cacheline is discarded from the CPU's cache and reloaded. To deal with this, the appropriate part of the kernel must invalidate the overlapping bits of the cache on each CPU.

See Documentation/core-api/cachetlb.rst for more information on cache management.


CACHE COHERENCY VS MMIO
-----------------------

Memory mapped I/O usually takes place through memory locations that are part of a window in the CPU's memory space that has different properties assigned than the usual RAM directed window.

Amongst these properties is usually the fact that such accesses bypass the caching entirely and go directly to the device buses. This means MMIO accesses may, in effect, overtake accesses to cached memory that were emitted earlier. A memory barrier isn't sufficient in such a case, but rather the cache must be flushed between the cached memory write and the MMIO access if the two are in any way dependent.


=========================
THE THINGS CPUS GET UP TO
=========================

A programmer might take it for granted that the CPU will perform memory operations in exactly the order specified, so that if the CPU is, for example, given the following piece of code to execute:

	a = READ_ONCE(*A);
	WRITE_ONCE(*B, b);
	c = READ_ONCE(*C);
	d = READ_ONCE(*D);
	WRITE_ONCE(*E, e);

they would then expect that the CPU will complete the memory operation for each instruction before moving on to the next one, leading to a definite sequence of operations as seen by external observers in the system:

	LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.

Reality is, of course, much messier.
With many CPUs and compilers, the above assumption doesn't hold because:

 (*) loads are more likely to need to be completed immediately to permit execution progress, whereas stores can often be deferred without a problem;

 (*) loads may be done speculatively, and the result discarded should it prove to have been unnecessary;

 (*) loads may be done speculatively, leading to the result having been fetched at the wrong time in the expected sequence of events;

 (*) the order of the memory accesses may be rearranged to promote better use of the CPU buses and caches;

 (*) loads and stores may be combined to improve performance when talking to memory or I/O hardware that can do batched accesses of adjacent locations, thus cutting down on transaction setup costs (memory and PCI devices may both be able to do this); and

 (*) the CPU's data cache may affect the ordering, and while cache-coherency mechanisms may alleviate this - once the store has actually hit the cache - there's no guarantee that the coherency management will be propagated in order to other CPUs.

So what another CPU, say, might actually observe from the above piece of code is:

	LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B

	(Where "LOAD {*C,*D}" is a combined load)

However, it is guaranteed that a CPU will be self-consistent: it will see its _own_ accesses appear to be correctly ordered, without the need for a memory barrier. For instance with the following code:

	U = READ_ONCE(*A);
	WRITE_ONCE(*A, V);
	WRITE_ONCE(*A, W);
	X = READ_ONCE(*A);
	WRITE_ONCE(*A, Y);
	Z = READ_ONCE(*A);

and assuming no intervention by an external influence, it can be assumed that the final result will appear to be:

	U == the original value of *A
	X == W
	Z == Y
	*A == Y

The code above may cause the CPU to generate the full sequence of memory accesses:

	U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A

in that order, but, without intervention, the sequence may have almost any combination of elements combined or discarded, provided the program's view of the world remains consistent. Note that READ_ONCE() and WRITE_ONCE() are -not- optional in the above example, as there are architectures where a given CPU might reorder successive loads to the same location. On such architectures, READ_ONCE() and WRITE_ONCE() do whatever is necessary to prevent this; for example, on Itanium the volatile casts used by READ_ONCE() and WRITE_ONCE() cause GCC to emit the special ld.acq and st.rel instructions (respectively) that prevent such reordering.

The compiler may also combine, discard or defer elements of the sequence before the CPU even sees them. For instance:

	*A = V;
	*A = W;

may be reduced to:

	*A = W;

since, without either a write barrier or a WRITE_ONCE(), it can be assumed that the effect of the storage of V to *A is lost. Similarly:

	*A = Y;
	Z = *A;

may, without a memory barrier or READ_ONCE() and WRITE_ONCE(), be reduced to:

	*A = Y;
	Z = Y;

and the LOAD operation never appears outside of the CPU.
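Expressed as a sketch (the variables are assumed as before), the _ONCE() forms oblige the compiler to emit every one of these accesses:

	WRITE_ONCE(*A, V);	/* must be emitted, even though... */
	WRITE_ONCE(*A, W);	/* ...it is immediately overwritten */
	Z = READ_ONCE(*A);	/* must be a real load, not a reuse of W */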
AND THEN THERE'S THE ALPHA
--------------------------

The DEC Alpha CPU is one of the most relaxed CPUs there is. Not only that, some versions of the Alpha CPU have a split data cache, permitting them to have two semantically-related cache lines updated at separate times. This is where the data dependency barrier really becomes necessary as this synchronises both caches with the memory coherence system, thus making it seem like pointer changes vs new data occur in the right order.

The Alpha defines the Linux kernel's memory model, although as of v4.15 the Linux kernel's addition of smp_mb() to READ_ONCE() on Alpha greatly reduced its impact on the memory model.


VIRTUAL MACHINE GUESTS
----------------------

Guests running within virtual machines might be affected by SMP effects even if the guest itself is compiled without SMP support. This is an artifact of interfacing with an SMP host while running a UP kernel. Using mandatory barriers for this use-case would be possible but is often suboptimal.

To handle this case optimally, low-level virt_mb() etc. macros are available. These have the same effect as smp_mb() etc. when SMP is enabled, but generate identical code for SMP and non-SMP systems. For example, virtual machine guests should use virt_mb() rather than smp_mb() when synchronizing against a (possibly SMP) host.

These are equivalent to their smp_mb() etc. counterparts in all other respects; in particular, they do not control MMIO effects: to control MMIO effects, use mandatory barriers.


============
EXAMPLE USES
============

CIRCULAR BUFFERS
----------------

Memory barriers can be used to implement circular buffering without the need of a lock to serialise the producer with the consumer. See:

	Documentation/core-api/circular-buffers.rst

for details.


==========
REFERENCES
==========

Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek, Digital Press)
	Chapter 5.2: Physical Address Space Characteristics
	Chapter 5.4: Caches and Write Buffers
	Chapter 5.5: Data Sharing
	Chapter 5.6: Read/Write Ordering

AMD64 Architecture Programmer's Manual Volume 2: System Programming
	Chapter 7.1: Memory-Access Ordering
	Chapter 7.4: Buffering and Combining Memory Writes

ARM Architecture Reference Manual (ARMv8, for ARMv8-A architecture profile)
	Chapter B2: The AArch64 Application Level Memory Model

IA-32 Intel Architecture Software Developer's Manual, Volume 3: System Programming Guide
	Chapter 7.1: Locked Atomic Operations
	Chapter 7.2: Memory Ordering
	Chapter 7.4: Serializing Instructions

The SPARC Architecture Manual, Version 9
	Chapter 8: Memory Models
	Appendix D: Formal Specification of the Memory Models
	Appendix J: Programming with the Memory Models

Storage in the PowerPC (Stone and Fitzgerald)

UltraSPARC Programmer Reference Manual
	Chapter 5: Memory Accesses and Cacheability
	Chapter 15: Sparc-V9 Memory Models

UltraSPARC III Cu User's Manual
	Chapter 9: Memory Models

UltraSPARC IIIi Processor User's Manual
	Chapter 8: Memory Models

UltraSPARC Architecture 2005
	Chapter 9: Memory
	Appendix D: Formal Specifications of the Memory Models

UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
	Chapter 8: Memory Models
	Appendix F: Caches and Cache Coherency

Solaris Internals, Core Kernel Architecture, p63-68:
	Chapter 3.3: Hardware Considerations for Locks and Synchronization

Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching for Kernel Programmers:
	Chapter 13: Other Memory Models

Intel Itanium Architecture Software Developer's Manual: Volume 1:
	Section 2.6: Speculation
	Section 4.4: Memory Access