| libbpf.c (f4a1692491b5cce8978cea19cb8946bc2c6f14bc, old) | libbpf.c (50450fc716c1a570ee8d8bfe198ef5d3cfca36e4, new) |
|---|---|
1// SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) 2 3/* 4 * Common eBPF ELF object loading operations. 5 * 6 * Copyright (C) 2013-2015 Alexei Starovoitov <ast@kernel.org> 7 * Copyright (C) 2015 Wang Nan <wangnan0@huawei.com> 8 * Copyright (C) 2015 Huawei Inc. --- 216 unchanged lines hidden (view full) --- 225 const struct bpf_sec_def *sec_def; 226 /* section_name with / replaced by _; makes recursive pinning 227 * in bpf_object__pin_programs easier 228 */ 229 char *pin_name; 230 struct bpf_insn *insns; 231 size_t insns_cnt, main_prog_cnt; 232 enum bpf_prog_type type; | 1// SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) 2 3/* 4 * Common eBPF ELF object loading operations. 5 * 6 * Copyright (C) 2013-2015 Alexei Starovoitov <ast@kernel.org> 7 * Copyright (C) 2015 Wang Nan <wangnan0@huawei.com> 8 * Copyright (C) 2015 Huawei Inc. --- 216 unchanged lines hidden (view full) --- 225 const struct bpf_sec_def *sec_def; 226 /* section_name with / replaced by _; makes recursive pinning 227 * in bpf_object__pin_programs easier 228 */ 229 char *pin_name; 230 struct bpf_insn *insns; 231 size_t insns_cnt, main_prog_cnt; 232 enum bpf_prog_type type; |
233 bool load; |
233 234 struct reloc_desc *reloc_desc; 235 int nr_reloc; 236 int log_level; 237 238 struct { 239 int nr; 240 int *fds; --- 39 unchanged lines hidden (view full) --- 280 void *kern_vdata; 281 __u32 type_id; 282}; 283 284#define DATA_SEC ".data" 285#define BSS_SEC ".bss" 286#define RODATA_SEC ".rodata" 287#define KCONFIG_SEC ".kconfig" | 234 235 struct reloc_desc *reloc_desc; 236 int nr_reloc; 237 int log_level; 238 239 struct { 240 int nr; 241 int *fds; --- 39 unchanged lines hidden (view full) --- 281 void *kern_vdata; 282 __u32 type_id; 283}; 284 285#define DATA_SEC ".data" 286#define BSS_SEC ".bss" 287#define RODATA_SEC ".rodata" 288#define KCONFIG_SEC ".kconfig" |
289#define KSYMS_SEC ".ksyms" |
288#define STRUCT_OPS_SEC ".struct_ops" 289 290enum libbpf_map_type { 291 LIBBPF_MAP_UNSPEC, 292 LIBBPF_MAP_DATA, 293 LIBBPF_MAP_BSS, 294 LIBBPF_MAP_RODATA, 295 LIBBPF_MAP_KCONFIG, --- 9 unchanged lines hidden (view full) --- 305struct bpf_map { 306 char *name; 307 int fd; 308 int sec_idx; 309 size_t sec_offset; 310 int map_ifindex; 311 int inner_map_fd; 312 struct bpf_map_def def; | 290#define STRUCT_OPS_SEC ".struct_ops" 291 292enum libbpf_map_type { 293 LIBBPF_MAP_UNSPEC, 294 LIBBPF_MAP_DATA, 295 LIBBPF_MAP_BSS, 296 LIBBPF_MAP_RODATA, 297 LIBBPF_MAP_KCONFIG, --- 9 unchanged lines hidden (view full) --- 307struct bpf_map { 308 char *name; 309 int fd; 310 int sec_idx; 311 size_t sec_offset; 312 int map_ifindex; 313 int inner_map_fd; 314 struct bpf_map_def def; |
315 __u32 numa_node; |
313 __u32 btf_var_idx; 314 __u32 btf_key_type_id; 315 __u32 btf_value_type_id; 316 __u32 btf_vmlinux_value_type_id; 317 void *priv; 318 bpf_map_clear_priv_t clear_priv; 319 enum libbpf_map_type libbpf_type; 320 void *mmaped; 321 struct bpf_struct_ops *st_ops; 322 struct bpf_map *inner_map; 323 void **init_slots; 324 int init_slots_sz; 325 char *pin_path; 326 bool pinned; 327 bool reused; 328}; 329 330enum extern_type { 331 EXT_UNKNOWN, | 316 __u32 btf_var_idx; 317 __u32 btf_key_type_id; 318 __u32 btf_value_type_id; 319 __u32 btf_vmlinux_value_type_id; 320 void *priv; 321 bpf_map_clear_priv_t clear_priv; 322 enum libbpf_map_type libbpf_type; 323 void *mmaped; 324 struct bpf_struct_ops *st_ops; 325 struct bpf_map *inner_map; 326 void **init_slots; 327 int init_slots_sz; 328 char *pin_path; 329 bool pinned; 330 bool reused; 331}; 332 333enum extern_type { 334 EXT_UNKNOWN, |
332 EXT_CHAR, 333 EXT_BOOL, 334 EXT_INT, 335 EXT_TRISTATE, 336 EXT_CHAR_ARR, | 335 EXT_KCFG, 336 EXT_KSYM, |
337}; 338 | 337}; 338 |
339enum kcfg_type { 340 KCFG_UNKNOWN, 341 KCFG_CHAR, 342 KCFG_BOOL, 343 KCFG_INT, 344 KCFG_TRISTATE, 345 KCFG_CHAR_ARR, 346}; 347 |
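For reference, the kcfg_type values just added classify the C types a BPF program may give its .kconfig externs. Below is a minimal BPF-side sketch, assuming the __kconfig macro and enum libbpf_tristate from bpf_helpers.h; apart from LINUX_KERNEL_VERSION, which this file handles explicitly during extern resolution, the CONFIG_* names are purely illustrative.

    /* Filled in by libbpf from the running kernel's config (or from an
     * extra kconfig string supplied at open time) before the object loads.
     */
    extern int LINUX_KERNEL_VERSION __kconfig;                  /* KCFG_INT      */
    extern _Bool CONFIG_PREEMPT __kconfig;                      /* KCFG_BOOL     */
    extern enum libbpf_tristate CONFIG_NF_CONNTRACK __kconfig;  /* KCFG_TRISTATE */
    extern char CONFIG_LOCALVERSION[32] __kconfig;              /* KCFG_CHAR_ARR */

Strong externs must resolve or loading fails; weak ones that stay unresolved default to zero, as the resolution loop near the end of this diff shows.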
339struct extern_desc { | 348struct extern_desc { |
340 const char *name; | 349 enum extern_type type; |
341 int sym_idx; 342 int btf_id; | 350 int sym_idx; 351 int btf_id; |
343 enum extern_type type; 344 int sz; 345 int align; 346 int data_off; 347 bool is_signed; 348 bool is_weak; | 352 int sec_btf_id; 353 const char *name; |
349 bool is_set; | 354 bool is_set; |
355 bool is_weak; 356 union { 357 struct { 358 enum kcfg_type type; 359 int sz; 360 int align; 361 int data_off; 362 bool is_signed; 363 } kcfg; 364 struct { 365 unsigned long long addr; 366 } ksym; 367 }; |
350}; 351 352static LIST_HEAD(bpf_objects_list); 353 354struct bpf_object { 355 char name[BPF_OBJ_NAME_LEN]; 356 char license[64]; 357 __u32 kern_version; --- 161 unchanged lines hidden (view full) --- 519 goto errout; 520 } 521 prog->insns_cnt = size / bpf_insn_sz; 522 memcpy(prog->insns, data, size); 523 prog->idx = idx; 524 prog->instances.fds = NULL; 525 prog->instances.nr = -1; 526 prog->type = BPF_PROG_TYPE_UNSPEC; | 368}; 369 370static LIST_HEAD(bpf_objects_list); 371 372struct bpf_object { 373 char name[BPF_OBJ_NAME_LEN]; 374 char license[64]; 375 __u32 kern_version; --- 161 unchanged lines hidden (view full) --- 537 goto errout; 538 } 539 prog->insns_cnt = size / bpf_insn_sz; 540 memcpy(prog->insns, data, size); 541 prog->idx = idx; 542 prog->instances.fds = NULL; 543 prog->instances.nr = -1; 544 prog->type = BPF_PROG_TYPE_UNSPEC; |
545 prog->load = true; |
527 528 return 0; 529errout: 530 bpf_program__exit(prog); 531 return -ENOMEM; 532} 533 534static int --- 883 unchanged lines hidden (view full) --- 1418 1419 for (i = 0; i < obj->nr_extern; i++) { 1420 if (strcmp(obj->externs[i].name, name) == 0) 1421 return &obj->externs[i]; 1422 } 1423 return NULL; 1424} 1425 | 546 547 return 0; 548errout: 549 bpf_program__exit(prog); 550 return -ENOMEM; 551} 552 553static int --- 883 unchanged lines hidden (view full) --- 1437 1438 for (i = 0; i < obj->nr_extern; i++) { 1439 if (strcmp(obj->externs[i].name, name) == 0) 1440 return &obj->externs[i]; 1441 } 1442 return NULL; 1443} 1444 |
1426static int set_ext_value_tri(struct extern_desc *ext, void *ext_val, 1427 char value) | 1445static int set_kcfg_value_tri(struct extern_desc *ext, void *ext_val, 1446 char value) |
1428{ | 1447{ |
1429 switch (ext->type) { 1430 case EXT_BOOL: | 1448 switch (ext->kcfg.type) { 1449 case KCFG_BOOL: |
1431 if (value == 'm') { | 1450 if (value == 'm') { |
1432 pr_warn("extern %s=%c should be tristate or char\n", | 1451 pr_warn("extern (kcfg) %s=%c should be tristate or char\n", |
1433 ext->name, value); 1434 return -EINVAL; 1435 } 1436 *(bool *)ext_val = value == 'y' ? true : false; 1437 break; | 1452 ext->name, value); 1453 return -EINVAL; 1454 } 1455 *(bool *)ext_val = value == 'y' ? true : false; 1456 break; |
1438 case EXT_TRISTATE: | 1457 case KCFG_TRISTATE: |
1439 if (value == 'y') 1440 *(enum libbpf_tristate *)ext_val = TRI_YES; 1441 else if (value == 'm') 1442 *(enum libbpf_tristate *)ext_val = TRI_MODULE; 1443 else /* value == 'n' */ 1444 *(enum libbpf_tristate *)ext_val = TRI_NO; 1445 break; | 1458 if (value == 'y') 1459 *(enum libbpf_tristate *)ext_val = TRI_YES; 1460 else if (value == 'm') 1461 *(enum libbpf_tristate *)ext_val = TRI_MODULE; 1462 else /* value == 'n' */ 1463 *(enum libbpf_tristate *)ext_val = TRI_NO; 1464 break; |
1446 case EXT_CHAR: | 1465 case KCFG_CHAR: |
1447 *(char *)ext_val = value; 1448 break; | 1466 *(char *)ext_val = value; 1467 break; |
1449 case EXT_UNKNOWN: 1450 case EXT_INT: 1451 case EXT_CHAR_ARR: | 1468 case KCFG_UNKNOWN: 1469 case KCFG_INT: 1470 case KCFG_CHAR_ARR: |
1452 default: | 1471 default: |
1453 pr_warn("extern %s=%c should be bool, tristate, or char\n", | 1472 pr_warn("extern (kcfg) %s=%c should be bool, tristate, or char\n", |
1454 ext->name, value); 1455 return -EINVAL; 1456 } 1457 ext->is_set = true; 1458 return 0; 1459} 1460 | 1473 ext->name, value); 1474 return -EINVAL; 1475 } 1476 ext->is_set = true; 1477 return 0; 1478} 1479 |
1461static int set_ext_value_str(struct extern_desc *ext, char *ext_val, 1462 const char *value) | 1480static int set_kcfg_value_str(struct extern_desc *ext, char *ext_val, 1481 const char *value) |
1463{ 1464 size_t len; 1465 | 1482{ 1483 size_t len; 1484 |
1466 if (ext->type != EXT_CHAR_ARR) { 1467 pr_warn("extern %s=%s should char array\n", ext->name, value); | 1485 if (ext->kcfg.type != KCFG_CHAR_ARR) { 1486 pr_warn("extern (kcfg) %s=%s should be char array\n", ext->name, value); |
1468 return -EINVAL; 1469 } 1470 1471 len = strlen(value); 1472 if (value[len - 1] != '"') { | 1487 return -EINVAL; 1488 } 1489 1490 len = strlen(value); 1491 if (value[len - 1] != '"') { |
1473 pr_warn("extern '%s': invalid string config '%s'\n", | 1492 pr_warn("extern (kcfg) '%s': invalid string config '%s'\n", |
1474 ext->name, value); 1475 return -EINVAL; 1476 } 1477 1478 /* strip quotes */ 1479 len -= 2; | 1493 ext->name, value); 1494 return -EINVAL; 1495 } 1496 1497 /* strip quotes */ 1498 len -= 2; |
1480 if (len >= ext->sz) { 1481 pr_warn("extern '%s': long string config %s of (%zu bytes) truncated to %d bytes\n", 1482 ext->name, value, len, ext->sz - 1); 1483 len = ext->sz - 1; | 1499 if (len >= ext->kcfg.sz) { 1500 pr_warn("extern (kcfg) '%s': long string config %s of (%zu bytes) truncated to %d bytes\n", 1501 ext->name, value, len, ext->kcfg.sz - 1); 1502 len = ext->kcfg.sz - 1; |
1484 } 1485 memcpy(ext_val, value + 1, len); 1486 ext_val[len] = '\0'; 1487 ext->is_set = true; 1488 return 0; 1489} 1490 1491static int parse_u64(const char *value, __u64 *res) --- 10 unchanged lines hidden (view full) --- 1502 } 1503 if (*value_end) { 1504 pr_warn("failed to parse '%s' as integer completely\n", value); 1505 return -EINVAL; 1506 } 1507 return 0; 1508} 1509 | 1503 } 1504 memcpy(ext_val, value + 1, len); 1505 ext_val[len] = '\0'; 1506 ext->is_set = true; 1507 return 0; 1508} 1509 1510static int parse_u64(const char *value, __u64 *res) --- 10 unchanged lines hidden (view full) --- 1521 } 1522 if (*value_end) { 1523 pr_warn("failed to parse '%s' as integer completely\n", value); 1524 return -EINVAL; 1525 } 1526 return 0; 1527} 1528 |
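The tri/str setters above, together with the numeric setter that follows, cover the three value syntaxes found in a kernel config. As a hedged sketch, the same lines could also be injected directly through the kconfig string of bpf_object_open_opts; the config values and object name are made up.

    DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts,
        .kconfig = "CONFIG_MODULES=y\n"              /* 'y'/'m'/'n' -> set_kcfg_value_tri */
                   "CONFIG_LOCALVERSION=\"-test\"\n" /* quoted      -> set_kcfg_value_str */
                   "CONFIG_HZ=250",                  /* integer     -> set_kcfg_value_num */
    );
    struct bpf_object *obj = bpf_object__open_file("prog.bpf.o", &opts);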
1510static bool is_ext_value_in_range(const struct extern_desc *ext, __u64 v) | 1529static bool is_kcfg_value_in_range(const struct extern_desc *ext, __u64 v) |
1511{ | 1530{ |
1512 int bit_sz = ext->sz * 8; | 1531 int bit_sz = ext->kcfg.sz * 8; |
1513 | 1532 |
1514 if (ext->sz == 8) | 1533 if (ext->kcfg.sz == 8) |
1515 return true; 1516 1517 /* Validate that value stored in u64 fits in integer of `ext->sz` 1518 * bytes size without any loss of information. If the target integer 1519 * is signed, we rely on the following limits of integer type of 1520 * Y bits and subsequent transformation: 1521 * 1522 * -2^(Y-1) <= X <= 2^(Y-1) - 1 1523 * 0 <= X + 2^(Y-1) <= 2^Y - 1 1524 * 0 <= X + 2^(Y-1) < 2^Y 1525 * 1526 * For unsigned target integer, check that all the (64 - Y) bits are 1527 * zero. 1528 */ | 1534 return true; 1535 1536 /* Validate that value stored in u64 fits in integer of `ext->sz` 1537 * bytes size without any loss of information. If the target integer 1538 * is signed, we rely on the following limits of integer type of 1539 * Y bits and subsequent transformation: 1540 * 1541 * -2^(Y-1) <= X <= 2^(Y-1) - 1 1542 * 0 <= X + 2^(Y-1) <= 2^Y - 1 1543 * 0 <= X + 2^(Y-1) < 2^Y 1544 * 1545 * For unsigned target integer, check that all the (64 - Y) bits are 1546 * zero. 1547 */ |
1529 if (ext->is_signed) | 1548 if (ext->kcfg.is_signed) |
1530 return v + (1ULL << (bit_sz - 1)) < (1ULL << bit_sz); 1531 else 1532 return (v >> bit_sz) == 0; 1533} 1534 | 1549 return v + (1ULL << (bit_sz - 1)) < (1ULL << bit_sz); 1550 else 1551 return (v >> bit_sz) == 0; 1552} 1553 |
1535static int set_ext_value_num(struct extern_desc *ext, void *ext_val, 1536 __u64 value) | 1554static int set_kcfg_value_num(struct extern_desc *ext, void *ext_val, 1555 __u64 value) |
1537{ | 1556{ |
1538 if (ext->type != EXT_INT && ext->type != EXT_CHAR) { 1539 pr_warn("extern %s=%llu should be integer\n", | 1557 if (ext->kcfg.type != KCFG_INT && ext->kcfg.type != KCFG_CHAR) { 1558 pr_warn("extern (kcfg) %s=%llu should be integer\n", |
1540 ext->name, (unsigned long long)value); 1541 return -EINVAL; 1542 } | 1559 ext->name, (unsigned long long)value); 1560 return -EINVAL; 1561 } |
1543 if (!is_ext_value_in_range(ext, value)) { 1544 pr_warn("extern %s=%llu value doesn't fit in %d bytes\n", 1545 ext->name, (unsigned long long)value, ext->sz); | 1562 if (!is_kcfg_value_in_range(ext, value)) { 1563 pr_warn("extern (kcfg) %s=%llu value doesn't fit in %d bytes\n", 1564 ext->name, (unsigned long long)value, ext->kcfg.sz); |
1546 return -ERANGE; 1547 } | 1565 return -ERANGE; 1566 } |
1548 switch (ext->sz) { | 1567 switch (ext->kcfg.sz) { |
1549 case 1: *(__u8 *)ext_val = value; break; 1550 case 2: *(__u16 *)ext_val = value; break; 1551 case 4: *(__u32 *)ext_val = value; break; 1552 case 8: *(__u64 *)ext_val = value; break; 1553 default: 1554 return -EINVAL; 1555 } 1556 ext->is_set = true; --- 29 unchanged lines hidden (view full) --- 1586 pr_warn("failed to parse '%s': no value\n", buf); 1587 return -EINVAL; 1588 } 1589 1590 ext = find_extern_by_name(obj, buf); 1591 if (!ext || ext->is_set) 1592 return 0; 1593 | 1568 case 1: *(__u8 *)ext_val = value; break; 1569 case 2: *(__u16 *)ext_val = value; break; 1570 case 4: *(__u32 *)ext_val = value; break; 1571 case 8: *(__u64 *)ext_val = value; break; 1572 default: 1573 return -EINVAL; 1574 } 1575 ext->is_set = true; --- 29 unchanged lines hidden (view full) --- 1605 pr_warn("failed to parse '%s': no value\n", buf); 1606 return -EINVAL; 1607 } 1608 1609 ext = find_extern_by_name(obj, buf); 1610 if (!ext || ext->is_set) 1611 return 0; 1612 |
1594 ext_val = data + ext->data_off; | 1613 ext_val = data + ext->kcfg.data_off; |
1595 value = sep + 1; 1596 1597 switch (*value) { 1598 case 'y': case 'n': case 'm': | 1614 value = sep + 1; 1615 1616 switch (*value) { 1617 case 'y': case 'n': case 'm': |
1599 err = set_ext_value_tri(ext, ext_val, *value); | 1618 err = set_kcfg_value_tri(ext, ext_val, *value); |
1600 break; 1601 case '"': | 1619 break; 1620 case '"': |
1602 err = set_ext_value_str(ext, ext_val, value); | 1621 err = set_kcfg_value_str(ext, ext_val, value); |
1603 break; 1604 default: 1605 /* assume integer */ 1606 err = parse_u64(value, &num); 1607 if (err) { | 1622 break; 1623 default: 1624 /* assume integer */ 1625 err = parse_u64(value, &num); 1626 if (err) { |
1608 pr_warn("extern %s=%s should be integer\n", | 1627 pr_warn("extern (kcfg) %s=%s should be integer\n", |
1609 ext->name, value); 1610 return err; 1611 } | 1628 ext->name, value); 1629 return err; 1630 } |
1612 err = set_ext_value_num(ext, ext_val, num); | 1631 err = set_kcfg_value_num(ext, ext_val, num); |
1613 break; 1614 } 1615 if (err) 1616 return err; | 1632 break; 1633 } 1634 if (err) 1635 return err; |
1617 pr_debug("extern %s=%s\n", ext->name, value); | 1636 pr_debug("extern (kcfg) %s=%s\n", ext->name, value); |
1618 return 0; 1619} 1620 1621static int bpf_object__read_kconfig_file(struct bpf_object *obj, void *data) 1622{ 1623 char buf[PATH_MAX]; 1624 struct utsname uts; 1625 int len, err = 0; --- 54 unchanged lines hidden (view full) --- 1680 } 1681 1682 fclose(file); 1683 return err; 1684} 1685 1686static int bpf_object__init_kconfig_map(struct bpf_object *obj) 1687{ | 1637 return 0; 1638} 1639 1640static int bpf_object__read_kconfig_file(struct bpf_object *obj, void *data) 1641{ 1642 char buf[PATH_MAX]; 1643 struct utsname uts; 1644 int len, err = 0; --- 54 unchanged lines hidden (view full) --- 1699 } 1700 1701 fclose(file); 1702 return err; 1703} 1704 1705static int bpf_object__init_kconfig_map(struct bpf_object *obj) 1706{ |
1688 struct extern_desc *last_ext; | 1707 struct extern_desc *last_ext = NULL, *ext; |
1689 size_t map_sz; | 1708 size_t map_sz; |
1690 int err; | 1709 int i, err; |
1691 | 1710 |
1692 if (obj->nr_extern == 0) | 1711 for (i = 0; i < obj->nr_extern; i++) { 1712 ext = &obj->externs[i]; 1713 if (ext->type == EXT_KCFG) 1714 last_ext = ext; 1715 } 1716 1717 if (!last_ext) |
1693 return 0; 1694 | 1718 return 0; 1719 |
1695 last_ext = &obj->externs[obj->nr_extern - 1]; 1696 map_sz = last_ext->data_off + last_ext->sz; 1697 | 1720 map_sz = last_ext->kcfg.data_off + last_ext->kcfg.sz; |
1698 err = bpf_object__init_internal_map(obj, LIBBPF_MAP_KCONFIG, 1699 obj->efile.symbols_shndx, 1700 NULL, map_sz); 1701 if (err) 1702 return err; 1703 1704 obj->kconfig_map_idx = obj->nr_maps - 1; 1705 --- 246 unchanged lines hidden (view full) --- 1952 pr_debug("map '%s': found max_entries = %u.\n", 1953 map->name, map->def.max_entries); 1954 } else if (strcmp(name, "map_flags") == 0) { 1955 if (!get_map_field_int(map->name, obj->btf, m, 1956 &map->def.map_flags)) 1957 return -EINVAL; 1958 pr_debug("map '%s': found map_flags = %u.\n", 1959 map->name, map->def.map_flags); | 1721 err = bpf_object__init_internal_map(obj, LIBBPF_MAP_KCONFIG, 1722 obj->efile.symbols_shndx, 1723 NULL, map_sz); 1724 if (err) 1725 return err; 1726 1727 obj->kconfig_map_idx = obj->nr_maps - 1; 1728 --- 246 unchanged lines hidden (view full) --- 1975 pr_debug("map '%s': found max_entries = %u.\n", 1976 map->name, map->def.max_entries); 1977 } else if (strcmp(name, "map_flags") == 0) { 1978 if (!get_map_field_int(map->name, obj->btf, m, 1979 &map->def.map_flags)) 1980 return -EINVAL; 1981 pr_debug("map '%s': found map_flags = %u.\n", 1982 map->name, map->def.map_flags); |
1983 } else if (strcmp(name, "numa_node") == 0) { 1984 if (!get_map_field_int(map->name, obj->btf, m, &map->numa_node)) 1985 return -EINVAL; 1986 pr_debug("map '%s': found numa_node = %u.\n", map->name, map->numa_node); |
1960 } else if (strcmp(name, "key_size") == 0) { 1961 __u32 sz; 1962 1963 if (!get_map_field_int(map->name, obj->btf, m, &sz)) 1964 return -EINVAL; 1965 pr_debug("map '%s': found key_size = %u.\n", 1966 map->name, sz); 1967 if (map->def.key_size && map->def.key_size != sz) { --- 338 unchanged lines hidden (view full) --- 2306 return false; 2307 2308 if (sh.sh_flags & SHF_EXECINSTR) 2309 return true; 2310 2311 return false; 2312} 2313 | 1987 } else if (strcmp(name, "key_size") == 0) { 1988 __u32 sz; 1989 1990 if (!get_map_field_int(map->name, obj->btf, m, &sz)) 1991 return -EINVAL; 1992 pr_debug("map '%s': found key_size = %u.\n", 1993 map->name, sz); 1994 if (map->def.key_size && map->def.key_size != sz) { --- 338 unchanged lines hidden (view full) --- 2333 return false; 2334 2335 if (sh.sh_flags & SHF_EXECINSTR) 2336 return true; 2337 2338 return false; 2339} 2340 |
2314static void bpf_object__sanitize_btf(struct bpf_object *obj) | 2341static bool btf_needs_sanitization(struct bpf_object *obj) |
2315{ 2316 bool has_func_global = obj->caps.btf_func_global; 2317 bool has_datasec = obj->caps.btf_datasec; 2318 bool has_func = obj->caps.btf_func; | 2342{ 2343 bool has_func_global = obj->caps.btf_func_global; 2344 bool has_datasec = obj->caps.btf_datasec; 2345 bool has_func = obj->caps.btf_func; |
2319 struct btf *btf = obj->btf; | 2346 2347 return !has_func || !has_datasec || !has_func_global; 2348} 2349 2350static void bpf_object__sanitize_btf(struct bpf_object *obj, struct btf *btf) 2351{ 2352 bool has_func_global = obj->caps.btf_func_global; 2353 bool has_datasec = obj->caps.btf_datasec; 2354 bool has_func = obj->caps.btf_func; |
2320 struct btf_type *t; 2321 int i, j, vlen; 2322 | 2355 struct btf_type *t; 2356 int i, j, vlen; 2357 |
2323 if (!obj->btf || (has_func && has_datasec && has_func_global)) 2324 return; 2325 | |
2326 for (i = 1; i <= btf__get_nr_types(btf); i++) { 2327 t = (struct btf_type *)btf__type_by_id(btf, i); 2328 2329 if (!has_datasec && btf_is_var(t)) { 2330 /* replace VAR with INT */ 2331 t->info = BTF_INFO_ENC(BTF_KIND_INT, 0, 0); 2332 /* 2333 * using size = 1 is the safest choice, 4 will be too --- 36 unchanged lines hidden (view full) --- 2370 t->info = BTF_INFO_ENC(BTF_KIND_TYPEDEF, 0, 0); 2371 } else if (!has_func_global && btf_is_func(t)) { 2372 /* replace BTF_FUNC_GLOBAL with BTF_FUNC_STATIC */ 2373 t->info = BTF_INFO_ENC(BTF_KIND_FUNC, 0, 0); 2374 } 2375 } 2376} 2377 | 2358 for (i = 1; i <= btf__get_nr_types(btf); i++) { 2359 t = (struct btf_type *)btf__type_by_id(btf, i); 2360 2361 if (!has_datasec && btf_is_var(t)) { 2362 /* replace VAR with INT */ 2363 t->info = BTF_INFO_ENC(BTF_KIND_INT, 0, 0); 2364 /* 2365 * using size = 1 is the safest choice, 4 will be too --- 36 unchanged lines hidden (view full) --- 2402 t->info = BTF_INFO_ENC(BTF_KIND_TYPEDEF, 0, 0); 2403 } else if (!has_func_global && btf_is_func(t)) { 2404 /* replace BTF_FUNC_GLOBAL with BTF_FUNC_STATIC */ 2405 t->info = BTF_INFO_ENC(BTF_KIND_FUNC, 0, 0); 2406 } 2407 } 2408} 2409 |
2378static void bpf_object__sanitize_btf_ext(struct bpf_object *obj) 2379{ 2380 if (!obj->btf_ext) 2381 return; 2382 2383 if (!obj->caps.btf_func) { 2384 btf_ext__free(obj->btf_ext); 2385 obj->btf_ext = NULL; 2386 } 2387} 2388 | |
2389static bool libbpf_needs_btf(const struct bpf_object *obj) 2390{ 2391 return obj->efile.btf_maps_shndx >= 0 || 2392 obj->efile.st_ops_shndx >= 0 || 2393 obj->nr_extern > 0; 2394} 2395 2396static bool kernel_needs_btf(const struct bpf_object *obj) --- 44 unchanged lines hidden (view full) --- 2441static int bpf_object__finalize_btf(struct bpf_object *obj) 2442{ 2443 int err; 2444 2445 if (!obj->btf) 2446 return 0; 2447 2448 err = btf__finalize_data(obj, obj->btf); | 2410static bool libbpf_needs_btf(const struct bpf_object *obj) 2411{ 2412 return obj->efile.btf_maps_shndx >= 0 || 2413 obj->efile.st_ops_shndx >= 0 || 2414 obj->nr_extern > 0; 2415} 2416 2417static bool kernel_needs_btf(const struct bpf_object *obj) --- 44 unchanged lines hidden (view full) --- 2462static int bpf_object__finalize_btf(struct bpf_object *obj) 2463{ 2464 int err; 2465 2466 if (!obj->btf) 2467 return 0; 2468 2469 err = btf__finalize_data(obj, obj->btf); |
2449 if (!err) 2450 return 0; 2451 2452 pr_warn("Error finalizing %s: %d.\n", BTF_ELF_SEC, err); 2453 btf__free(obj->btf); 2454 obj->btf = NULL; 2455 btf_ext__free(obj->btf_ext); 2456 obj->btf_ext = NULL; 2457 2458 if (libbpf_needs_btf(obj)) { 2459 pr_warn("BTF is required, but is missing or corrupted.\n"); 2460 return -ENOENT; | 2470 if (err) { 2471 pr_warn("Error finalizing %s: %d.\n", BTF_ELF_SEC, err); 2472 return err; |
2461 } | 2473 } |
2462 return 0; 2463} 2464 2465static inline bool libbpf_prog_needs_vmlinux_btf(struct bpf_program *prog) 2466{ 2467 if (prog->type == BPF_PROG_TYPE_STRUCT_OPS || 2468 prog->type == BPF_PROG_TYPE_LSM) 2469 return true; --- 4 unchanged lines hidden (view full) --- 2474 if (prog->type == BPF_PROG_TYPE_TRACING && !prog->attach_prog_fd) 2475 return true; 2476 2477 return false; 2478} 2479 2480static int bpf_object__load_vmlinux_btf(struct bpf_object *obj) 2481{ | 2475 return 0; 2476} 2477 2478static inline bool libbpf_prog_needs_vmlinux_btf(struct bpf_program *prog) 2479{ 2480 if (prog->type == BPF_PROG_TYPE_STRUCT_OPS || 2481 prog->type == BPF_PROG_TYPE_LSM) 2482 return true; --- 4 unchanged lines hidden (view full) --- 2487 if (prog->type == BPF_PROG_TYPE_TRACING && !prog->attach_prog_fd) 2488 return true; 2489 2490 return false; 2491} 2492 2493static int bpf_object__load_vmlinux_btf(struct bpf_object *obj) 2494{ |
2495 bool need_vmlinux_btf = false; |
2482 struct bpf_program *prog; 2483 int err; 2484 | 2496 struct bpf_program *prog; 2497 int err; 2498 |
2499 /* CO-RE relocations need kernel BTF */ 2500 if (obj->btf_ext && obj->btf_ext->field_reloc_info.len) 2501 need_vmlinux_btf = true; 2502 |
2485 bpf_object__for_each_program(prog, obj) { | 2503 bpf_object__for_each_program(prog, obj) { |
2504 if (!prog->load) 2505 continue; |
2486 if (libbpf_prog_needs_vmlinux_btf(prog)) { | 2506 if (libbpf_prog_needs_vmlinux_btf(prog)) { |
2487 obj->btf_vmlinux = libbpf_find_kernel_btf(); 2488 if (IS_ERR(obj->btf_vmlinux)) { 2489 err = PTR_ERR(obj->btf_vmlinux); 2490 pr_warn("Error loading vmlinux BTF: %d\n", err); 2491 obj->btf_vmlinux = NULL; 2492 return err; 2493 } 2494 return 0; | 2507 need_vmlinux_btf = true; 2508 break; |
2495 } 2496 } 2497 | 2509 } 2510 } 2511 |
2512 if (!need_vmlinux_btf) 2513 return 0; 2514 2515 obj->btf_vmlinux = libbpf_find_kernel_btf(); 2516 if (IS_ERR(obj->btf_vmlinux)) { 2517 err = PTR_ERR(obj->btf_vmlinux); 2518 pr_warn("Error loading vmlinux BTF: %d\n", err); 2519 obj->btf_vmlinux = NULL; 2520 return err; 2521 } |
2498 return 0; 2499} 2500 2501static int bpf_object__sanitize_and_load_btf(struct bpf_object *obj) 2502{ | 2522 return 0; 2523} 2524 2525static int bpf_object__sanitize_and_load_btf(struct bpf_object *obj) 2526{ |
2527 struct btf *kern_btf = obj->btf; 2528 bool btf_mandatory, sanitize; |
2503 int err = 0; 2504 2505 if (!obj->btf) 2506 return 0; 2507 | 2529 int err = 0; 2530 2531 if (!obj->btf) 2532 return 0; 2533 |
2508 bpf_object__sanitize_btf(obj); 2509 bpf_object__sanitize_btf_ext(obj); | 2534 sanitize = btf_needs_sanitization(obj); 2535 if (sanitize) { 2536 const void *raw_data; 2537 __u32 sz; |
2510 | 2538 |
2511 err = btf__load(obj->btf); 2512 if (err) { 2513 pr_warn("Error loading %s into kernel: %d.\n", 2514 BTF_ELF_SEC, err); 2515 btf__free(obj->btf); 2516 obj->btf = NULL; 2517 /* btf_ext can't exist without btf, so free it as well */ 2518 if (obj->btf_ext) { 2519 btf_ext__free(obj->btf_ext); 2520 obj->btf_ext = NULL; 2521 } | 2539 /* clone BTF to sanitize a copy and leave the original intact */ 2540 raw_data = btf__get_raw_data(obj->btf, &sz); 2541 kern_btf = btf__new(raw_data, sz); 2542 if (IS_ERR(kern_btf)) 2543 return PTR_ERR(kern_btf); |
2522 | 2544 |
2523 if (kernel_needs_btf(obj)) 2524 return err; | 2545 bpf_object__sanitize_btf(obj, kern_btf); |
2525 } | 2546 } |
2526 return 0; | 2547 2548 err = btf__load(kern_btf); 2549 if (sanitize) { 2550 if (!err) { 2551 /* move fd to libbpf's BTF */ 2552 btf__set_fd(obj->btf, btf__fd(kern_btf)); 2553 btf__set_fd(kern_btf, -1); 2554 } 2555 btf__free(kern_btf); 2556 } 2557 if (err) { 2558 btf_mandatory = kernel_needs_btf(obj); 2559 pr_warn("Error loading .BTF into kernel: %d. %s\n", err, 2560 btf_mandatory ? "BTF is mandatory, can't proceed." 2561 : "BTF is optional, ignoring."); 2562 if (!btf_mandatory) 2563 err = 0; 2564 } 2565 return err; |
2527} 2528 2529static int bpf_object__elf_collect(struct bpf_object *obj) 2530{ 2531 Elf *elf = obj->efile.elf; 2532 GElf_Ehdr *ep = &obj->efile.ehdr; 2533 Elf_Data *btf_ext_data = NULL; 2534 Elf_Data *btf_data = NULL; --- 169 unchanged lines hidden (view full) --- 2704 return -EINVAL; 2705 2706 return i; 2707 } 2708 2709 return -ENOENT; 2710} 2711 | 2566} 2567 2568static int bpf_object__elf_collect(struct bpf_object *obj) 2569{ 2570 Elf *elf = obj->efile.elf; 2571 GElf_Ehdr *ep = &obj->efile.ehdr; 2572 Elf_Data *btf_ext_data = NULL; 2573 Elf_Data *btf_data = NULL; --- 169 unchanged lines hidden (view full) --- 2743 return -EINVAL; 2744 2745 return i; 2746 } 2747 2748 return -ENOENT; 2749} 2750 |
2712static enum extern_type find_extern_type(const struct btf *btf, int id, 2713 bool *is_signed) | 2751static int find_extern_sec_btf_id(struct btf *btf, int ext_btf_id) { 2752 const struct btf_var_secinfo *vs; 2753 const struct btf_type *t; 2754 int i, j, n; 2755 2756 if (!btf) 2757 return -ESRCH; 2758 2759 n = btf__get_nr_types(btf); 2760 for (i = 1; i <= n; i++) { 2761 t = btf__type_by_id(btf, i); 2762 2763 if (!btf_is_datasec(t)) 2764 continue; 2765 2766 vs = btf_var_secinfos(t); 2767 for (j = 0; j < btf_vlen(t); j++, vs++) { 2768 if (vs->type == ext_btf_id) 2769 return i; 2770 } 2771 } 2772 2773 return -ENOENT; 2774} 2775 2776static enum kcfg_type find_kcfg_type(const struct btf *btf, int id, 2777 bool *is_signed) |
2714{ 2715 const struct btf_type *t; 2716 const char *name; 2717 2718 t = skip_mods_and_typedefs(btf, id, NULL); 2719 name = btf__name_by_offset(btf, t->name_off); 2720 2721 if (is_signed) 2722 *is_signed = false; 2723 switch (btf_kind(t)) { 2724 case BTF_KIND_INT: { 2725 int enc = btf_int_encoding(t); 2726 2727 if (enc & BTF_INT_BOOL) | 2778{ 2779 const struct btf_type *t; 2780 const char *name; 2781 2782 t = skip_mods_and_typedefs(btf, id, NULL); 2783 name = btf__name_by_offset(btf, t->name_off); 2784 2785 if (is_signed) 2786 *is_signed = false; 2787 switch (btf_kind(t)) { 2788 case BTF_KIND_INT: { 2789 int enc = btf_int_encoding(t); 2790 2791 if (enc & BTF_INT_BOOL) |
2728 return t->size == 1 ? EXT_BOOL : EXT_UNKNOWN; | 2792 return t->size == 1 ? KCFG_BOOL : KCFG_UNKNOWN; |
2729 if (is_signed) 2730 *is_signed = enc & BTF_INT_SIGNED; 2731 if (t->size == 1) | 2793 if (is_signed) 2794 *is_signed = enc & BTF_INT_SIGNED; 2795 if (t->size == 1) |
2732 return EXT_CHAR; | 2796 return KCFG_CHAR; |
2733 if (t->size < 1 || t->size > 8 || (t->size & (t->size - 1))) | 2797 if (t->size < 1 || t->size > 8 || (t->size & (t->size - 1))) |
2734 return EXT_UNKNOWN; 2735 return EXT_INT; | 2798 return KCFG_UNKNOWN; 2799 return KCFG_INT; |
2736 } 2737 case BTF_KIND_ENUM: 2738 if (t->size != 4) | 2800 } 2801 case BTF_KIND_ENUM: 2802 if (t->size != 4) |
2739 return EXT_UNKNOWN; | 2803 return KCFG_UNKNOWN; |
2740 if (strcmp(name, "libbpf_tristate")) | 2804 if (strcmp(name, "libbpf_tristate")) |
2741 return EXT_UNKNOWN; 2742 return EXT_TRISTATE; | 2805 return KCFG_UNKNOWN; 2806 return KCFG_TRISTATE; |
2743 case BTF_KIND_ARRAY: 2744 if (btf_array(t)->nelems == 0) | 2807 case BTF_KIND_ARRAY: 2808 if (btf_array(t)->nelems == 0) |
2745 return EXT_UNKNOWN; 2746 if (find_extern_type(btf, btf_array(t)->type, NULL) != EXT_CHAR) 2747 return EXT_UNKNOWN; 2748 return EXT_CHAR_ARR; | 2809 return KCFG_UNKNOWN; 2810 if (find_kcfg_type(btf, btf_array(t)->type, NULL) != KCFG_CHAR) 2811 return KCFG_UNKNOWN; 2812 return KCFG_CHAR_ARR; |
2749 default: | 2813 default: |
2750 return EXT_UNKNOWN; | 2814 return KCFG_UNKNOWN; |
2751 } 2752} 2753 2754static int cmp_externs(const void *_a, const void *_b) 2755{ 2756 const struct extern_desc *a = _a; 2757 const struct extern_desc *b = _b; 2758 | 2815 } 2816} 2817 2818static int cmp_externs(const void *_a, const void *_b) 2819{ 2820 const struct extern_desc *a = _a; 2821 const struct extern_desc *b = _b; 2822 |
2759 /* descending order by alignment requirements */ 2760 if (a->align != b->align) 2761 return a->align > b->align ? -1 : 1; 2762 /* ascending order by size, within same alignment class */ 2763 if (a->sz != b->sz) 2764 return a->sz < b->sz ? -1 : 1; | 2823 if (a->type != b->type) 2824 return a->type < b->type ? -1 : 1; 2825 2826 if (a->type == EXT_KCFG) { 2827 /* descending order by alignment requirements */ 2828 if (a->kcfg.align != b->kcfg.align) 2829 return a->kcfg.align > b->kcfg.align ? -1 : 1; 2830 /* ascending order by size, within same alignment class */ 2831 if (a->kcfg.sz != b->kcfg.sz) 2832 return a->kcfg.sz < b->kcfg.sz ? -1 : 1; 2833 } 2834 |
2765 /* resolve ties by name */ 2766 return strcmp(a->name, b->name); 2767} 2768 | 2835 /* resolve ties by name */ 2836 return strcmp(a->name, b->name); 2837} 2838 |
2839static int find_int_btf_id(const struct btf *btf) 2840{ 2841 const struct btf_type *t; 2842 int i, n; 2843 2844 n = btf__get_nr_types(btf); 2845 for (i = 1; i <= n; i++) { 2846 t = btf__type_by_id(btf, i); 2847 2848 if (btf_is_int(t) && btf_int_bits(t) == 32) 2849 return i; 2850 } 2851 2852 return 0; 2853} 2854 |
2769static int bpf_object__collect_externs(struct bpf_object *obj) 2770{ | 2855static int bpf_object__collect_externs(struct bpf_object *obj) 2856{ |
2857 struct btf_type *sec, *kcfg_sec = NULL, *ksym_sec = NULL; |
2771 const struct btf_type *t; 2772 struct extern_desc *ext; | 2858 const struct btf_type *t; 2859 struct extern_desc *ext; |
2773 int i, n, off, btf_id; 2774 struct btf_type *sec; 2775 const char *ext_name; | 2860 int i, n, off; 2861 const char *ext_name, *sec_name; |
2776 Elf_Scn *scn; 2777 GElf_Shdr sh; 2778 2779 if (!obj->efile.symbols) 2780 return 0; 2781 2782 scn = elf_getscn(obj->efile.elf, obj->efile.symbols_shndx); 2783 if (!scn) --- 29 unchanged lines hidden (view full) --- 2813 pr_warn("failed to find BTF for extern '%s': %d\n", 2814 ext_name, ext->btf_id); 2815 return ext->btf_id; 2816 } 2817 t = btf__type_by_id(obj->btf, ext->btf_id); 2818 ext->name = btf__name_by_offset(obj->btf, t->name_off); 2819 ext->sym_idx = i; 2820 ext->is_weak = GELF_ST_BIND(sym.st_info) == STB_WEAK; | 2862 Elf_Scn *scn; 2863 GElf_Shdr sh; 2864 2865 if (!obj->efile.symbols) 2866 return 0; 2867 2868 scn = elf_getscn(obj->efile.elf, obj->efile.symbols_shndx); 2869 if (!scn) --- 29 unchanged lines hidden (view full) --- 2899 pr_warn("failed to find BTF for extern '%s': %d\n", 2900 ext_name, ext->btf_id); 2901 return ext->btf_id; 2902 } 2903 t = btf__type_by_id(obj->btf, ext->btf_id); 2904 ext->name = btf__name_by_offset(obj->btf, t->name_off); 2905 ext->sym_idx = i; 2906 ext->is_weak = GELF_ST_BIND(sym.st_info) == STB_WEAK; |
2821 ext->sz = btf__resolve_size(obj->btf, t->type); 2822 if (ext->sz <= 0) { 2823 pr_warn("failed to resolve size of extern '%s': %d\n", 2824 ext_name, ext->sz); 2825 return ext->sz; | 2907 2908 ext->sec_btf_id = find_extern_sec_btf_id(obj->btf, ext->btf_id); 2909 if (ext->sec_btf_id <= 0) { 2910 pr_warn("failed to find BTF for extern '%s' [%d] section: %d\n", 2911 ext_name, ext->btf_id, ext->sec_btf_id); 2912 return ext->sec_btf_id; |
2826 } | 2913 } |
2827 ext->align = btf__align_of(obj->btf, t->type); 2828 if (ext->align <= 0) { 2829 pr_warn("failed to determine alignment of extern '%s': %d\n", 2830 ext_name, ext->align); 2831 return -EINVAL; 2832 } 2833 ext->type = find_extern_type(obj->btf, t->type, 2834 &ext->is_signed); 2835 if (ext->type == EXT_UNKNOWN) { 2836 pr_warn("extern '%s' type is unsupported\n", ext_name); | 2914 sec = (void *)btf__type_by_id(obj->btf, ext->sec_btf_id); 2915 sec_name = btf__name_by_offset(obj->btf, sec->name_off); 2916 2917 if (strcmp(sec_name, KCONFIG_SEC) == 0) { 2918 kcfg_sec = sec; 2919 ext->type = EXT_KCFG; 2920 ext->kcfg.sz = btf__resolve_size(obj->btf, t->type); 2921 if (ext->kcfg.sz <= 0) { 2922 pr_warn("failed to resolve size of extern (kcfg) '%s': %d\n", 2923 ext_name, ext->kcfg.sz); 2924 return ext->kcfg.sz; 2925 } 2926 ext->kcfg.align = btf__align_of(obj->btf, t->type); 2927 if (ext->kcfg.align <= 0) { 2928 pr_warn("failed to determine alignment of extern (kcfg) '%s': %d\n", 2929 ext_name, ext->kcfg.align); 2930 return -EINVAL; 2931 } 2932 ext->kcfg.type = find_kcfg_type(obj->btf, t->type, 2933 &ext->kcfg.is_signed); 2934 if (ext->kcfg.type == KCFG_UNKNOWN) { 2935 pr_warn("extern (kcfg) '%s' type is unsupported\n", ext_name); 2936 return -ENOTSUP; 2937 } 2938 } else if (strcmp(sec_name, KSYMS_SEC) == 0) { 2939 const struct btf_type *vt; 2940 2941 ksym_sec = sec; 2942 ext->type = EXT_KSYM; 2943 2944 vt = skip_mods_and_typedefs(obj->btf, t->type, NULL); 2945 if (!btf_is_void(vt)) { 2946 pr_warn("extern (ksym) '%s' is not typeless (void)\n", ext_name); 2947 return -ENOTSUP; 2948 } 2949 } else { 2950 pr_warn("unrecognized extern section '%s'\n", sec_name); |
2837 return -ENOTSUP; 2838 } 2839 } 2840 pr_debug("collected %d externs total\n", obj->nr_extern); 2841 2842 if (!obj->nr_extern) 2843 return 0; 2844 | 2951 return -ENOTSUP; 2952 } 2953 } 2954 pr_debug("collected %d externs total\n", obj->nr_extern); 2955 2956 if (!obj->nr_extern) 2957 return 0; 2958 |
2845 /* sort externs by (alignment, size, name) and calculate their offsets 2846 * within a map */ | 2959 /* sort externs by type, for kcfg ones also by (align, size, name) */ |
2847 qsort(obj->externs, obj->nr_extern, sizeof(*ext), cmp_externs); | 2960 qsort(obj->externs, obj->nr_extern, sizeof(*ext), cmp_externs); |
2848 off = 0; 2849 for (i = 0; i < obj->nr_extern; i++) { 2850 ext = &obj->externs[i]; 2851 ext->data_off = roundup(off, ext->align); 2852 off = ext->data_off + ext->sz; 2853 pr_debug("extern #%d: symbol %d, off %u, name %s\n", 2854 i, ext->sym_idx, ext->data_off, ext->name); 2855 } | |
2856 | 2961 |
2857 btf_id = btf__find_by_name(obj->btf, KCONFIG_SEC); 2858 if (btf_id <= 0) { 2859 pr_warn("no BTF info found for '%s' datasec\n", KCONFIG_SEC); 2860 return -ESRCH; 2861 } | 2962 /* for .ksyms section, we need to turn all externs into allocated 2963 * variables in BTF to pass kernel verification; we do this by 2964 * pretending that each extern is a 8-byte variable 2965 */ 2966 if (ksym_sec) { 2967 /* find existing 4-byte integer type in BTF to use for fake 2968 * extern variables in DATASEC 2969 */ 2970 int int_btf_id = find_int_btf_id(obj->btf); |
2862 | 2971 |
2863 sec = (struct btf_type *)btf__type_by_id(obj->btf, btf_id); 2864 sec->size = off; 2865 n = btf_vlen(sec); 2866 for (i = 0; i < n; i++) { 2867 struct btf_var_secinfo *vs = btf_var_secinfos(sec) + i; | 2972 for (i = 0; i < obj->nr_extern; i++) { 2973 ext = &obj->externs[i]; 2974 if (ext->type != EXT_KSYM) 2975 continue; 2976 pr_debug("extern (ksym) #%d: symbol %d, name %s\n", 2977 i, ext->sym_idx, ext->name); 2978 } |
2868 | 2979 |
2869 t = btf__type_by_id(obj->btf, vs->type); 2870 ext_name = btf__name_by_offset(obj->btf, t->name_off); 2871 ext = find_extern_by_name(obj, ext_name); 2872 if (!ext) { 2873 pr_warn("failed to find extern definition for BTF var '%s'\n", 2874 ext_name); 2875 return -ESRCH; | 2980 sec = ksym_sec; 2981 n = btf_vlen(sec); 2982 for (i = 0, off = 0; i < n; i++, off += sizeof(int)) { 2983 struct btf_var_secinfo *vs = btf_var_secinfos(sec) + i; 2984 struct btf_type *vt; 2985 2986 vt = (void *)btf__type_by_id(obj->btf, vs->type); 2987 ext_name = btf__name_by_offset(obj->btf, vt->name_off); 2988 ext = find_extern_by_name(obj, ext_name); 2989 if (!ext) { 2990 pr_warn("failed to find extern definition for BTF var '%s'\n", 2991 ext_name); 2992 return -ESRCH; 2993 } 2994 btf_var(vt)->linkage = BTF_VAR_GLOBAL_ALLOCATED; 2995 vt->type = int_btf_id; 2996 vs->offset = off; 2997 vs->size = sizeof(int); |
2876 } | 2998 } |
2877 vs->offset = ext->data_off; 2878 btf_var(t)->linkage = BTF_VAR_GLOBAL_ALLOCATED; | 2999 sec->size = off; |
2879 } 2880 | 3000 } 3001 |
3002 if (kcfg_sec) { 3003 sec = kcfg_sec; 3004 /* for kcfg externs calculate their offsets within a .kconfig map */ 3005 off = 0; 3006 for (i = 0; i < obj->nr_extern; i++) { 3007 ext = &obj->externs[i]; 3008 if (ext->type != EXT_KCFG) 3009 continue; 3010 3011 ext->kcfg.data_off = roundup(off, ext->kcfg.align); 3012 off = ext->kcfg.data_off + ext->kcfg.sz; 3013 pr_debug("extern (kcfg) #%d: symbol %d, off %u, name %s\n", 3014 i, ext->sym_idx, ext->kcfg.data_off, ext->name); 3015 } 3016 sec->size = off; 3017 n = btf_vlen(sec); 3018 for (i = 0; i < n; i++) { 3019 struct btf_var_secinfo *vs = btf_var_secinfos(sec) + i; 3020 3021 t = btf__type_by_id(obj->btf, vs->type); 3022 ext_name = btf__name_by_offset(obj->btf, t->name_off); 3023 ext = find_extern_by_name(obj, ext_name); 3024 if (!ext) { 3025 pr_warn("failed to find extern definition for BTF var '%s'\n", 3026 ext_name); 3027 return -ESRCH; 3028 } 3029 btf_var(t)->linkage = BTF_VAR_GLOBAL_ALLOCATED; 3030 vs->offset = ext->kcfg.data_off; 3031 } 3032 } |
2881 return 0; 2882} 2883 2884static struct bpf_program * 2885bpf_object__find_prog_by_idx(struct bpf_object *obj, int idx) 2886{ 2887 struct bpf_program *prog; 2888 size_t i; --- 113 unchanged lines hidden (view full) --- 3002 if (ext->sym_idx == sym_idx) 3003 break; 3004 } 3005 if (i >= n) { 3006 pr_warn("extern relo failed to find extern for sym %d\n", 3007 sym_idx); 3008 return -LIBBPF_ERRNO__RELOC; 3009 } | 3033 return 0; 3034} 3035 3036static struct bpf_program * 3037bpf_object__find_prog_by_idx(struct bpf_object *obj, int idx) 3038{ 3039 struct bpf_program *prog; 3040 size_t i; --- 113 unchanged lines hidden (view full) --- 3154 if (ext->sym_idx == sym_idx) 3155 break; 3156 } 3157 if (i >= n) { 3158 pr_warn("extern relo failed to find extern for sym %d\n", 3159 sym_idx); 3160 return -LIBBPF_ERRNO__RELOC; 3161 } |
3010 pr_debug("found extern #%d '%s' (sym %d, off %u) for insn %u\n", 3011 i, ext->name, ext->sym_idx, ext->data_off, insn_idx); | 3162 pr_debug("found extern #%d '%s' (sym %d) for insn %u\n", 3163 i, ext->name, ext->sym_idx, insn_idx); |
3012 reloc_desc->type = RELO_EXTERN; 3013 reloc_desc->insn_idx = insn_idx; | 3164 reloc_desc->type = RELO_EXTERN; 3165 reloc_desc->insn_idx = insn_idx; |
3014 reloc_desc->sym_off = ext->data_off; | 3166 reloc_desc->sym_off = i; /* sym_off stores extern index */ |
3015 return 0; 3016 } 3017 3018 if (!shdr_idx || shdr_idx >= SHN_LORESERVE) { 3019 pr_warn("invalid relo for \'%s\' in special section 0x%x; forgot to initialize global var?..\n", 3020 name, shdr_idx); 3021 return -LIBBPF_ERRNO__RELOC; 3022 } --- 194 unchanged lines hidden (view full) --- 3217 3218err_close_new_fd: 3219 close(new_fd); 3220err_free_new_name: 3221 free(new_name); 3222 return err; 3223} 3224 | 3167 return 0; 3168 } 3169 3170 if (!shdr_idx || shdr_idx >= SHN_LORESERVE) { 3171 pr_warn("invalid relo for \'%s\' in special section 0x%x; forgot to initialize global var?..\n", 3172 name, shdr_idx); 3173 return -LIBBPF_ERRNO__RELOC; 3174 } --- 194 unchanged lines hidden (view full) --- 3369 3370err_close_new_fd: 3371 close(new_fd); 3372err_free_new_name: 3373 free(new_name); 3374 return err; 3375} 3376 |
3225int bpf_map__resize(struct bpf_map *map, __u32 max_entries) | 3377__u32 bpf_map__max_entries(const struct bpf_map *map) |
3226{ | 3378{ |
3227 if (!map || !max_entries) 3228 return -EINVAL; | 3379 return map->def.max_entries; 3380} |
3229 | 3381 |
3230 /* If map already created, its attributes can't be changed. */ | 3382int bpf_map__set_max_entries(struct bpf_map *map, __u32 max_entries) 3383{ |
3231 if (map->fd >= 0) 3232 return -EBUSY; | 3384 if (map->fd >= 0) 3385 return -EBUSY; |
3233 | |
3234 map->def.max_entries = max_entries; | 3386 map->def.max_entries = max_entries; |
3235 | |
3236 return 0; 3237} 3238 | 3387 return 0; 3388} 3389 |
3390int bpf_map__resize(struct bpf_map *map, __u32 max_entries) 3391{ 3392 if (!map || !max_entries) 3393 return -EINVAL; 3394 3395 return bpf_map__set_max_entries(map, max_entries); 3396} 3397 |
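A short userspace sketch of the new getter/setter pair above: resizing has to happen after open but before load, since bpf_map__set_max_entries() returns -EBUSY once the map FD exists. The object path and map name are hypothetical.

    #include <stdio.h>
    #include <bpf/libbpf.h>

    int resize_example(void)
    {
        struct bpf_object *obj = bpf_object__open("prog.bpf.o");  /* hypothetical object */
        struct bpf_map *map;

        if (libbpf_get_error(obj))
            return -1;
        map = bpf_object__find_map_by_name(obj, "events");        /* hypothetical map */
        if (!map || bpf_map__set_max_entries(map, 4096))          /* must precede load */
            goto err;
        if (bpf_object__load(obj))
            goto err;
        printf("max_entries = %u\n", bpf_map__max_entries(map));
        bpf_object__close(obj);
        return 0;
    err:
        bpf_object__close(obj);
        return -1;
    }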
3239static int 3240bpf_object__probe_loading(struct bpf_object *obj) 3241{ 3242 struct bpf_load_program_attr attr; 3243 char *cp, errmsg[STRERR_BUFSIZE]; 3244 struct bpf_insn insns[] = { 3245 BPF_MOV64_IMM(BPF_REG_0, 0), 3246 BPF_EXIT_INSN(), --- 351 unchanged lines hidden (view full) --- 3598 3599 if (obj->caps.name) 3600 create_attr.name = map->name; 3601 create_attr.map_ifindex = map->map_ifindex; 3602 create_attr.map_type = def->type; 3603 create_attr.map_flags = def->map_flags; 3604 create_attr.key_size = def->key_size; 3605 create_attr.value_size = def->value_size; | 3398static int 3399bpf_object__probe_loading(struct bpf_object *obj) 3400{ 3401 struct bpf_load_program_attr attr; 3402 char *cp, errmsg[STRERR_BUFSIZE]; 3403 struct bpf_insn insns[] = { 3404 BPF_MOV64_IMM(BPF_REG_0, 0), 3405 BPF_EXIT_INSN(), --- 351 unchanged lines hidden (view full) --- 3757 3758 if (obj->caps.name) 3759 create_attr.name = map->name; 3760 create_attr.map_ifindex = map->map_ifindex; 3761 create_attr.map_type = def->type; 3762 create_attr.map_flags = def->map_flags; 3763 create_attr.key_size = def->key_size; 3764 create_attr.value_size = def->value_size; |
3765 create_attr.numa_node = map->numa_node; |
3606 3607 if (def->type == BPF_MAP_TYPE_PERF_EVENT_ARRAY && !def->max_entries) { 3608 int nr_cpus; 3609 3610 nr_cpus = libbpf_num_possible_cpus(); 3611 if (nr_cpus < 0) { 3612 pr_warn("map '%s': failed to determine number of system CPUs: %d\n", 3613 map->name, nr_cpus); --- 7 unchanged lines hidden (view full) --- 3621 3622 if (bpf_map__is_struct_ops(map)) 3623 create_attr.btf_vmlinux_value_type_id = 3624 map->btf_vmlinux_value_type_id; 3625 3626 create_attr.btf_fd = 0; 3627 create_attr.btf_key_type_id = 0; 3628 create_attr.btf_value_type_id = 0; | 3766 3767 if (def->type == BPF_MAP_TYPE_PERF_EVENT_ARRAY && !def->max_entries) { 3768 int nr_cpus; 3769 3770 nr_cpus = libbpf_num_possible_cpus(); 3771 if (nr_cpus < 0) { 3772 pr_warn("map '%s': failed to determine number of system CPUs: %d\n", 3773 map->name, nr_cpus); --- 7 unchanged lines hidden (view full) --- 3781 3782 if (bpf_map__is_struct_ops(map)) 3783 create_attr.btf_vmlinux_value_type_id = 3784 map->btf_vmlinux_value_type_id; 3785 3786 create_attr.btf_fd = 0; 3787 create_attr.btf_key_type_id = 0; 3788 create_attr.btf_value_type_id = 0; |
3629 if (obj->btf && !bpf_map_find_btf_info(obj, map)) { | 3789 if (obj->btf && btf__fd(obj->btf) >= 0 && !bpf_map_find_btf_info(obj, map)) { |
3630 create_attr.btf_fd = btf__fd(obj->btf); 3631 create_attr.btf_key_type_id = map->btf_key_type_id; 3632 create_attr.btf_value_type_id = map->btf_value_type_id; 3633 } 3634 3635 if (bpf_map_type__is_map_in_map(def->type)) { 3636 if (map->inner_map) { 3637 int err; --- 1156 unchanged lines hidden (view full) --- 4794 struct bpf_program *prog; 4795 struct btf *targ_btf; 4796 const char *sec_name; 4797 int i, err = 0; 4798 4799 if (targ_btf_path) 4800 targ_btf = btf__parse_elf(targ_btf_path, NULL); 4801 else | 3790 create_attr.btf_fd = btf__fd(obj->btf); 3791 create_attr.btf_key_type_id = map->btf_key_type_id; 3792 create_attr.btf_value_type_id = map->btf_value_type_id; 3793 } 3794 3795 if (bpf_map_type__is_map_in_map(def->type)) { 3796 if (map->inner_map) { 3797 int err; --- 1156 unchanged lines hidden (view full) --- 4954 struct bpf_program *prog; 4955 struct btf *targ_btf; 4956 const char *sec_name; 4957 int i, err = 0; 4958 4959 if (targ_btf_path) 4960 targ_btf = btf__parse_elf(targ_btf_path, NULL); 4961 else |
4802 targ_btf = libbpf_find_kernel_btf(); 4803 if (IS_ERR(targ_btf)) { | 4962 targ_btf = obj->btf_vmlinux; 4963 if (IS_ERR_OR_NULL(targ_btf)) { |
4804 pr_warn("failed to get target BTF: %ld\n", PTR_ERR(targ_btf)); 4805 return PTR_ERR(targ_btf); 4806 } 4807 4808 cand_cache = hashmap__new(bpf_core_hash_fn, bpf_core_equal_fn, NULL); 4809 if (IS_ERR(cand_cache)) { 4810 err = PTR_ERR(cand_cache); 4811 goto out; --- 30 unchanged lines hidden (view full) --- 4842 pr_warn("prog '%s': relo #%d: failed to relocate: %d\n", 4843 sec_name, i, err); 4844 goto out; 4845 } 4846 } 4847 } 4848 4849out: | 4964 pr_warn("failed to get target BTF: %ld\n", PTR_ERR(targ_btf)); 4965 return PTR_ERR(targ_btf); 4966 } 4967 4968 cand_cache = hashmap__new(bpf_core_hash_fn, bpf_core_equal_fn, NULL); 4969 if (IS_ERR(cand_cache)) { 4970 err = PTR_ERR(cand_cache); 4971 goto out; --- 30 unchanged lines hidden (view full) --- 5002 pr_warn("prog '%s': relo #%d: failed to relocate: %d\n", 5003 sec_name, i, err); 5004 goto out; 5005 } 5006 } 5007 } 5008 5009out: |
4850 btf__free(targ_btf); | 5010 /* obj->btf_vmlinux is freed at the end of object load phase */ 5011 if (targ_btf != obj->btf_vmlinux) 5012 btf__free(targ_btf); |
4851 if (!IS_ERR_OR_NULL(cand_cache)) { 4852 hashmap__for_each_entry(cand_cache, entry, i) { 4853 bpf_core_free_cands(entry->value); 4854 } 4855 hashmap__free(cand_cache); 4856 } 4857 return err; 4858} --- 70 unchanged lines hidden (view full) --- 4929 } 4930 4931 if (!prog->reloc_desc) 4932 return 0; 4933 4934 for (i = 0; i < prog->nr_reloc; i++) { 4935 struct reloc_desc *relo = &prog->reloc_desc[i]; 4936 struct bpf_insn *insn = &prog->insns[relo->insn_idx]; | 5013 if (!IS_ERR_OR_NULL(cand_cache)) { 5014 hashmap__for_each_entry(cand_cache, entry, i) { 5015 bpf_core_free_cands(entry->value); 5016 } 5017 hashmap__free(cand_cache); 5018 } 5019 return err; 5020} --- 70 unchanged lines hidden (view full) --- 5091 } 5092 5093 if (!prog->reloc_desc) 5094 return 0; 5095 5096 for (i = 0; i < prog->nr_reloc; i++) { 5097 struct reloc_desc *relo = &prog->reloc_desc[i]; 5098 struct bpf_insn *insn = &prog->insns[relo->insn_idx]; |
5099 struct extern_desc *ext; |
4937 4938 if (relo->insn_idx + 1 >= (int)prog->insns_cnt) { 4939 pr_warn("relocation out of range: '%s'\n", 4940 prog->section_name); 4941 return -LIBBPF_ERRNO__RELOC; 4942 } 4943 4944 switch (relo->type) { 4945 case RELO_LD64: 4946 insn[0].src_reg = BPF_PSEUDO_MAP_FD; 4947 insn[0].imm = obj->maps[relo->map_idx].fd; 4948 break; 4949 case RELO_DATA: 4950 insn[0].src_reg = BPF_PSEUDO_MAP_VALUE; 4951 insn[1].imm = insn[0].imm + relo->sym_off; 4952 insn[0].imm = obj->maps[relo->map_idx].fd; 4953 break; 4954 case RELO_EXTERN: | 5100 5101 if (relo->insn_idx + 1 >= (int)prog->insns_cnt) { 5102 pr_warn("relocation out of range: '%s'\n", 5103 prog->section_name); 5104 return -LIBBPF_ERRNO__RELOC; 5105 } 5106 5107 switch (relo->type) { 5108 case RELO_LD64: 5109 insn[0].src_reg = BPF_PSEUDO_MAP_FD; 5110 insn[0].imm = obj->maps[relo->map_idx].fd; 5111 break; 5112 case RELO_DATA: 5113 insn[0].src_reg = BPF_PSEUDO_MAP_VALUE; 5114 insn[1].imm = insn[0].imm + relo->sym_off; 5115 insn[0].imm = obj->maps[relo->map_idx].fd; 5116 break; 5117 case RELO_EXTERN: |
4955 insn[0].src_reg = BPF_PSEUDO_MAP_VALUE; 4956 insn[0].imm = obj->maps[obj->kconfig_map_idx].fd; 4957 insn[1].imm = relo->sym_off; | 5118 ext = &obj->externs[relo->sym_off]; 5119 if (ext->type == EXT_KCFG) { 5120 insn[0].src_reg = BPF_PSEUDO_MAP_VALUE; 5121 insn[0].imm = obj->maps[obj->kconfig_map_idx].fd; 5122 insn[1].imm = ext->kcfg.data_off; 5123 } else /* EXT_KSYM */ { 5124 insn[0].imm = (__u32)ext->ksym.addr; 5125 insn[1].imm = ext->ksym.addr >> 32; 5126 } |
4958 break; 4959 case RELO_CALL: 4960 err = bpf_program__reloc_text(prog, obj, relo); 4961 if (err) 4962 return err; 4963 break; 4964 default: 4965 pr_warn("relo #%d: bad relo type %d\n", i, relo->type); --- 236 unchanged lines hidden (view full) --- 5202 } else if (prog->type == BPF_PROG_TYPE_TRACING || 5203 prog->type == BPF_PROG_TYPE_EXT) { 5204 load_attr.attach_prog_fd = prog->attach_prog_fd; 5205 load_attr.attach_btf_id = prog->attach_btf_id; 5206 } else { 5207 load_attr.kern_version = kern_version; 5208 load_attr.prog_ifindex = prog->prog_ifindex; 5209 } | 5127 break; 5128 case RELO_CALL: 5129 err = bpf_program__reloc_text(prog, obj, relo); 5130 if (err) 5131 return err; 5132 break; 5133 default: 5134 pr_warn("relo #%d: bad relo type %d\n", i, relo->type); --- 236 unchanged lines hidden (view full) --- 5371 } else if (prog->type == BPF_PROG_TYPE_TRACING || 5372 prog->type == BPF_PROG_TYPE_EXT) { 5373 load_attr.attach_prog_fd = prog->attach_prog_fd; 5374 load_attr.attach_btf_id = prog->attach_btf_id; 5375 } else { 5376 load_attr.kern_version = kern_version; 5377 load_attr.prog_ifindex = prog->prog_ifindex; 5378 } |
5210 /* if .BTF.ext was loaded, kernel supports associated BTF for prog */ 5211 if (prog->obj->btf_ext) 5212 btf_fd = bpf_object__btf_fd(prog->obj); 5213 else 5214 btf_fd = -1; 5215 load_attr.prog_btf_fd = btf_fd >= 0 ? btf_fd : 0; 5216 load_attr.func_info = prog->func_info; 5217 load_attr.func_info_rec_size = prog->func_info_rec_size; 5218 load_attr.func_info_cnt = prog->func_info_cnt; 5219 load_attr.line_info = prog->line_info; 5220 load_attr.line_info_rec_size = prog->line_info_rec_size; 5221 load_attr.line_info_cnt = prog->line_info_cnt; | 5379 /* specify func_info/line_info only if kernel supports them */ 5380 btf_fd = bpf_object__btf_fd(prog->obj); 5381 if (btf_fd >= 0 && prog->obj->caps.btf_func) { 5382 load_attr.prog_btf_fd = btf_fd; 5383 load_attr.func_info = prog->func_info; 5384 load_attr.func_info_rec_size = prog->func_info_rec_size; 5385 load_attr.func_info_cnt = prog->func_info_cnt; 5386 load_attr.line_info = prog->line_info; 5387 load_attr.line_info_rec_size = prog->line_info_rec_size; 5388 load_attr.line_info_cnt = prog->line_info_cnt; 5389 } |
5222 load_attr.log_level = prog->log_level; 5223 load_attr.prog_flags = prog->prog_flags; 5224 5225retry_load: 5226 if (log_buf_size) { 5227 log_buf = malloc(log_buf_size); 5228 if (!log_buf) 5229 return -ENOMEM; --- 52 unchanged lines hidden (view full) --- 5282} 5283 5284static int libbpf_find_attach_btf_id(struct bpf_program *prog); 5285 5286int bpf_program__load(struct bpf_program *prog, char *license, __u32 kern_ver) 5287{ 5288 int err = 0, fd, i, btf_id; 5289 | 5390 load_attr.log_level = prog->log_level; 5391 load_attr.prog_flags = prog->prog_flags; 5392 5393retry_load: 5394 if (log_buf_size) { 5395 log_buf = malloc(log_buf_size); 5396 if (!log_buf) 5397 return -ENOMEM; --- 52 unchanged lines hidden (view full) --- 5450} 5451 5452static int libbpf_find_attach_btf_id(struct bpf_program *prog); 5453 5454int bpf_program__load(struct bpf_program *prog, char *license, __u32 kern_ver) 5455{ 5456 int err = 0, fd, i, btf_id; 5457 |
5458 if (prog->obj->loaded) { 5459 pr_warn("prog '%s'('%s'): can't load after object was loaded\n", 5460 prog->name, prog->section_name); 5461 return -EINVAL; 5462 } 5463 |
5290 if ((prog->type == BPF_PROG_TYPE_TRACING || 5291 prog->type == BPF_PROG_TYPE_LSM || 5292 prog->type == BPF_PROG_TYPE_EXT) && !prog->attach_btf_id) { 5293 btf_id = libbpf_find_attach_btf_id(prog); 5294 if (btf_id <= 0) 5295 return btf_id; 5296 prog->attach_btf_id = btf_id; 5297 } --- 72 unchanged lines hidden (view full) --- 5370 const struct bpf_object *obj) 5371{ 5372 return prog->idx == obj->efile.text_shndx && obj->has_pseudo_calls; 5373} 5374 5375static int 5376bpf_object__load_progs(struct bpf_object *obj, int log_level) 5377{ | 5464 if ((prog->type == BPF_PROG_TYPE_TRACING || 5465 prog->type == BPF_PROG_TYPE_LSM || 5466 prog->type == BPF_PROG_TYPE_EXT) && !prog->attach_btf_id) { 5467 btf_id = libbpf_find_attach_btf_id(prog); 5468 if (btf_id <= 0) 5469 return btf_id; 5470 prog->attach_btf_id = btf_id; 5471 } --- 72 unchanged lines hidden (view full) --- 5544 const struct bpf_object *obj) 5545{ 5546 return prog->idx == obj->efile.text_shndx && obj->has_pseudo_calls; 5547} 5548 5549static int 5550bpf_object__load_progs(struct bpf_object *obj, int log_level) 5551{ |
5552 struct bpf_program *prog; |
5378 size_t i; 5379 int err; 5380 5381 for (i = 0; i < obj->nr_programs; i++) { | 5553 size_t i; 5554 int err; 5555 5556 for (i = 0; i < obj->nr_programs; i++) { |
5382 if (bpf_program__is_function_storage(&obj->programs[i], obj)) | 5557 prog = &obj->programs[i]; 5558 if (bpf_program__is_function_storage(prog, obj)) |
5383 continue; | 5559 continue; |
5384 obj->programs[i].log_level |= log_level; 5385 err = bpf_program__load(&obj->programs[i], 5386 obj->license, 5387 obj->kern_version); | 5560 if (!prog->load) { 5561 pr_debug("prog '%s'('%s'): skipped loading\n", 5562 prog->name, prog->section_name); 5563 continue; 5564 } 5565 prog->log_level |= log_level; 5566 err = bpf_program__load(prog, obj->license, obj->kern_version); |
5388 if (err) 5389 return err; 5390 } 5391 return 0; 5392} 5393 5394static const struct bpf_sec_def *find_sec_def(const char *sec_name); 5395 --- 172 unchanged lines hidden (view full) --- 5568 } 5569 if (!obj->caps.array_mmap) 5570 m->def.map_flags ^= BPF_F_MMAPABLE; 5571 } 5572 5573 return 0; 5574} 5575 | 5567 if (err) 5568 return err; 5569 } 5570 return 0; 5571} 5572 5573static const struct bpf_sec_def *find_sec_def(const char *sec_name); 5574 --- 172 unchanged lines hidden (view full) --- 5747 } 5748 if (!obj->caps.array_mmap) 5749 m->def.map_flags ^= BPF_F_MMAPABLE; 5750 } 5751 5752 return 0; 5753} 5754 |
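The prog->load flag consulted above is what lets callers skip loading individual programs. A sketch of driving it from userspace, assuming the public bpf_program__set_autoload() setter that accompanies this flag; the object path and section title are illustrative.

    #include <stdbool.h>
    #include <bpf/libbpf.h>

    int load_selectively(void)
    {
        struct bpf_object *obj = bpf_object__open("prog.bpf.o");  /* hypothetical object */
        struct bpf_program *prog;

        if (libbpf_get_error(obj))
            return -1;
        /* Skip kernel loading for one program; the rest load as usual. */
        prog = bpf_object__find_program_by_title(obj, "raw_tp/sys_enter");
        if (prog)
            bpf_program__set_autoload(prog, false);
        if (bpf_object__load(obj)) {
            bpf_object__close(obj);
            return -1;
        }
        bpf_object__close(obj);
        return 0;
    }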
5755static int bpf_object__read_kallsyms_file(struct bpf_object *obj) 5756{ 5757 char sym_type, sym_name[500]; 5758 unsigned long long sym_addr; 5759 struct extern_desc *ext; 5760 int ret, err = 0; 5761 FILE *f; 5762 5763 f = fopen("/proc/kallsyms", "r"); 5764 if (!f) { 5765 err = -errno; 5766 pr_warn("failed to open /proc/kallsyms: %d\n", err); 5767 return err; 5768 } 5769 5770 while (true) { 5771 ret = fscanf(f, "%llx %c %499s%*[^\n]\n", 5772 &sym_addr, &sym_type, sym_name); 5773 if (ret == EOF && feof(f)) 5774 break; 5775 if (ret != 3) { 5776 pr_warn("failed to read kallsyms entry: %d\n", ret); 5777 err = -EINVAL; 5778 goto out; 5779 } 5780 5781 ext = find_extern_by_name(obj, sym_name); 5782 if (!ext || ext->type != EXT_KSYM) 5783 continue; 5784 5785 if (ext->is_set && ext->ksym.addr != sym_addr) { 5786 pr_warn("extern (ksym) '%s' resolution is ambiguous: 0x%llx or 0x%llx\n", 5787 sym_name, ext->ksym.addr, sym_addr); 5788 err = -EINVAL; 5789 goto out; 5790 } 5791 if (!ext->is_set) { 5792 ext->is_set = true; 5793 ext->ksym.addr = sym_addr; 5794 pr_debug("extern (ksym) %s=0x%llx\n", sym_name, sym_addr); 5795 } 5796 } 5797 5798out: 5799 fclose(f); 5800 return err; 5801} 5802 |
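The kallsyms reader above supplies addresses for .ksyms externs. On the BPF side such an extern must be typeless (void), matching the btf_is_void() check earlier in this diff. A sketch assuming the __ksym macro from bpf_helpers.h; the symbol name is illustrative (any /proc/kallsyms symbol works).

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* Resolved against /proc/kallsyms at load time; must be typeless. */
    extern const void bpf_prog_active __ksym;

    SEC("raw_tp/sys_enter")
    int print_ksym_addr(void *ctx)
    {
        unsigned long addr = (unsigned long)&bpf_prog_active;

        bpf_printk("bpf_prog_active is at 0x%lx", addr);
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";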
5576static int bpf_object__resolve_externs(struct bpf_object *obj, 5577 const char *extra_kconfig) 5578{ | 5803static int bpf_object__resolve_externs(struct bpf_object *obj, 5804 const char *extra_kconfig) 5805{ |
5579 bool need_config = false; | 5806 bool need_config = false, need_kallsyms = false; |
5580 struct extern_desc *ext; | 5807 struct extern_desc *ext; |
5808 void *kcfg_data = NULL; |
5581 int err, i; | 5809 int err, i; |
5582 void *data; | |
5583 5584 if (obj->nr_extern == 0) 5585 return 0; 5586 | 5810 5811 if (obj->nr_extern == 0) 5812 return 0; 5813 |
5587 data = obj->maps[obj->kconfig_map_idx].mmaped; | 5814 if (obj->kconfig_map_idx >= 0) 5815 kcfg_data = obj->maps[obj->kconfig_map_idx].mmaped; |
5588 5589 for (i = 0; i < obj->nr_extern; i++) { 5590 ext = &obj->externs[i]; 5591 | 5816 5817 for (i = 0; i < obj->nr_extern; i++) { 5818 ext = &obj->externs[i]; 5819 |
5592 if (strcmp(ext->name, "LINUX_KERNEL_VERSION") == 0) { 5593 void *ext_val = data + ext->data_off; | 5820 if (ext->type == EXT_KCFG && 5821 strcmp(ext->name, "LINUX_KERNEL_VERSION") == 0) { 5822 void *ext_val = kcfg_data + ext->kcfg.data_off; |
5594 __u32 kver = get_kernel_version(); 5595 5596 if (!kver) { 5597 pr_warn("failed to get kernel version\n"); 5598 return -EINVAL; 5599 } | 5823 __u32 kver = get_kernel_version(); 5824 5825 if (!kver) { 5826 pr_warn("failed to get kernel version\n"); 5827 return -EINVAL; 5828 } |
5600 err = set_ext_value_num(ext, ext_val, kver); | 5829 err = set_kcfg_value_num(ext, ext_val, kver); |
5601 if (err) 5602 return err; | 5830 if (err) 5831 return err; |
5603 pr_debug("extern %s=0x%x\n", ext->name, kver); 5604 } else if (strncmp(ext->name, "CONFIG_", 7) == 0) { | 5832 pr_debug("extern (kcfg) %s=0x%x\n", ext->name, kver); 5833 } else if (ext->type == EXT_KCFG && 5834 strncmp(ext->name, "CONFIG_", 7) == 0) { |
5605 need_config = true; | 5835 need_config = true; |
5836 } else if (ext->type == EXT_KSYM) { 5837 need_kallsyms = true; |
|
5606 } else { 5607 pr_warn("unrecognized extern '%s'\n", ext->name); 5608 return -EINVAL; 5609 } 5610 } 5611 if (need_config && extra_kconfig) { | 5838 } else { 5839 pr_warn("unrecognized extern '%s'\n", ext->name); 5840 return -EINVAL; 5841 } 5842 } 5843 if (need_config && extra_kconfig) { |
5612 err = bpf_object__read_kconfig_mem(obj, extra_kconfig, data); | 5844 err = bpf_object__read_kconfig_mem(obj, extra_kconfig, kcfg_data); |
5613 if (err) 5614 return -EINVAL; 5615 need_config = false; 5616 for (i = 0; i < obj->nr_extern; i++) { 5617 ext = &obj->externs[i]; | 5845 if (err) 5846 return -EINVAL; 5847 need_config = false; 5848 for (i = 0; i < obj->nr_extern; i++) { 5849 ext = &obj->externs[i]; |
5618 if (!ext->is_set) { | 5850 if (ext->type == EXT_KCFG && !ext->is_set) { |
5619 need_config = true; 5620 break; 5621 } 5622 } 5623 } 5624 if (need_config) { | 5851 need_config = true; 5852 break; 5853 } 5854 } 5855 } 5856 if (need_config) { |
5625 err = bpf_object__read_kconfig_file(obj, data); | 5857 err = bpf_object__read_kconfig_file(obj, kcfg_data); |
5626 if (err) 5627 return -EINVAL; 5628 } | 5858 if (err) 5859 return -EINVAL; 5860 } |
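The kconfig path above fills in __kconfig externs from the running kernel's config (or from the extra_kconfig override passed at open time). A minimal BPF-side sketch, assuming bpf_helpers.h provides __kconfig; CONFIG_HZ is just an example option and the tracepoint choice is illustrative:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

extern unsigned int LINUX_KERNEL_VERSION __kconfig; /* always provided */
extern int CONFIG_HZ __kconfig;                     /* from /proc/config.gz or /boot/config-* */

SEC("tp/syscalls/sys_enter_nanosleep")
int on_nanosleep(void *ctx)
{
        bpf_printk("kver=0x%x HZ=%d\n", LINUX_KERNEL_VERSION, CONFIG_HZ);
        return 0;
}

char LICENSE[] SEC("license") = "GPL";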
5861 if (need_kallsyms) { 5862 err = bpf_object__read_kallsyms_file(obj); 5863 if (err) 5864 return -EINVAL; 5865 } |
|
5629 for (i = 0; i < obj->nr_extern; i++) { 5630 ext = &obj->externs[i]; 5631 5632 if (!ext->is_set && !ext->is_weak) { 5633 pr_warn("extern %s (strong) not resolved\n", ext->name); 5634 return -ESRCH; 5635 } else if (!ext->is_set) { 5636 pr_debug("extern %s (weak) not resolved, defaulting to zero\n", --- 11 unchanged lines hidden (view full) --- 5648 5649 if (!attr) 5650 return -EINVAL; 5651 obj = attr->obj; 5652 if (!obj) 5653 return -EINVAL; 5654 5655 if (obj->loaded) { | 5866 for (i = 0; i < obj->nr_extern; i++) { 5867 ext = &obj->externs[i]; 5868 5869 if (!ext->is_set && !ext->is_weak) { 5870 pr_warn("extern %s (strong) not resolved\n", ext->name); 5871 return -ESRCH; 5872 } else if (!ext->is_set) { 5873 pr_debug("extern %s (weak) not resolved, defaulting to zero\n", --- 11 unchanged lines hidden (view full) --- 5885 5886 if (!attr) 5887 return -EINVAL; 5888 obj = attr->obj; 5889 if (!obj) 5890 return -EINVAL; 5891 5892 if (obj->loaded) { |
5656 pr_warn("object should not be loaded twice\n"); | 5893 pr_warn("object '%s': load can't be attempted twice\n", obj->name); |
5657 return -EINVAL; 5658 } 5659 | 5894 return -EINVAL; 5895 } 5896 |
5660 obj->loaded = true; 5661 | |
5662 err = bpf_object__probe_loading(obj); 5663 err = err ? : bpf_object__probe_caps(obj); 5664 err = err ? : bpf_object__resolve_externs(obj, obj->kconfig); 5665 err = err ? : bpf_object__sanitize_and_load_btf(obj); 5666 err = err ? : bpf_object__sanitize_maps(obj); 5667 err = err ? : bpf_object__load_vmlinux_btf(obj); 5668 err = err ? : bpf_object__init_kern_struct_ops_maps(obj); 5669 err = err ? : bpf_object__create_maps(obj); 5670 err = err ? : bpf_object__relocate(obj, attr->target_btf_path); 5671 err = err ? : bpf_object__load_progs(obj, attr->log_level); 5672 5673 btf__free(obj->btf_vmlinux); 5674 obj->btf_vmlinux = NULL; 5675 | 5897 err = bpf_object__probe_loading(obj); 5898 err = err ? : bpf_object__probe_caps(obj); 5899 err = err ? : bpf_object__resolve_externs(obj, obj->kconfig); 5900 err = err ? : bpf_object__sanitize_and_load_btf(obj); 5901 err = err ? : bpf_object__sanitize_maps(obj); 5902 err = err ? : bpf_object__load_vmlinux_btf(obj); 5903 err = err ? : bpf_object__init_kern_struct_ops_maps(obj); 5904 err = err ? : bpf_object__create_maps(obj); 5905 err = err ? : bpf_object__relocate(obj, attr->target_btf_path); 5906 err = err ? : bpf_object__load_progs(obj, attr->log_level); 5907 5908 btf__free(obj->btf_vmlinux); 5909 obj->btf_vmlinux = NULL; 5910 |
5911 obj->loaded = true; /* doesn't matter if successfully or not */ 5912 |
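A minimal user-space sketch of driving this load sequence; the object file name is illustrative and error handling is abbreviated. Note that, per the IS_ERR_OR_NULL change to bpf_object__close() below, teardown is also safe when open returned an error pointer:

#include <bpf/libbpf.h>

int load_example(void)
{
        struct bpf_object *obj;
        int err;

        obj = bpf_object__open_file("prog.bpf.o", NULL); /* hypothetical object file */
        err = libbpf_get_error(obj);
        if (err)
                return err;

        /* probes caps, resolves externs, creates maps, loads programs */
        err = bpf_object__load(obj);

        /* in real usage you would attach programs before tearing down */
        bpf_object__close(obj);
        return err;
}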
|
5676 if (err) 5677 goto out; 5678 5679 return 0; 5680out: 5681 /* unpin any maps that were auto-pinned during load */ 5682 for (i = 0; i < obj->nr_maps; i++) 5683 if (obj->maps[i].pinned && !obj->maps[i].reused) --- 578 unchanged lines hidden (view full) --- 6262 if (map->fd >= 0) 6263 zclose(map->fd); 6264} 6265 6266void bpf_object__close(struct bpf_object *obj) 6267{ 6268 size_t i; 6269 | 5913 if (err) 5914 goto out; 5915 5916 return 0; 5917out: 5918 /* unpin any maps that were auto-pinned during load */ 5919 for (i = 0; i < obj->nr_maps; i++) 5920 if (obj->maps[i].pinned && !obj->maps[i].reused) --- 578 unchanged lines hidden (view full) --- 6499 if (map->fd >= 0) 6500 zclose(map->fd); 6501} 6502 6503void bpf_object__close(struct bpf_object *obj) 6504{ 6505 size_t i; 6506 |
6270 if (!obj) | 6507 if (IS_ERR_OR_NULL(obj)) |
6271 return; 6272 6273 if (obj->clear_priv) 6274 obj->clear_priv(obj, obj->priv); 6275 6276 bpf_object__elf_finish(obj); 6277 bpf_object__unload(obj); 6278 btf__free(obj->btf); --- 161 unchanged lines hidden (view full) --- 6440 pr_warn("failed to strdup program title\n"); 6441 return ERR_PTR(-ENOMEM); 6442 } 6443 } 6444 6445 return title; 6446} 6447 | 6508 return; 6509 6510 if (obj->clear_priv) 6511 obj->clear_priv(obj, obj->priv); 6512 6513 bpf_object__elf_finish(obj); 6514 bpf_object__unload(obj); 6515 btf__free(obj->btf); --- 161 unchanged lines hidden (view full) --- 6677 pr_warn("failed to strdup program title\n"); 6678 return ERR_PTR(-ENOMEM); 6679 } 6680 } 6681 6682 return title; 6683} 6684 |
6685bool bpf_program__autoload(const struct bpf_program *prog) 6686{ 6687 return prog->load; 6688} 6689 6690int bpf_program__set_autoload(struct bpf_program *prog, bool autoload) 6691{ 6692 if (prog->obj->loaded) 6693 return -EINVAL; 6694 6695 prog->load = autoload; 6696 return 0; 6697} 6698 |
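The new autoload getter/setter pairs with the prog->load handling in bpf_object__load_progs() above: marking a program as not-autoloaded before bpf_object__load() skips it entirely. A sketch, with an illustrative program name:

#include <stdio.h>
#include <bpf/libbpf.h>

/* call after open, before bpf_object__load() */
static void skip_optional_prog(struct bpf_object *obj)
{
        struct bpf_program *prog;

        prog = bpf_object__find_program_by_name(obj, "optional_probe");
        if (prog && bpf_program__set_autoload(prog, false))
                /* fails with -EINVAL once the object has been loaded */
                fprintf(stderr, "could not disable autoload\n");
        /* the skeleton attach loop below also skips non-autoloaded programs */
}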
|
6448int bpf_program__fd(const struct bpf_program *prog) 6449{ 6450 return bpf_program__nth_fd(prog, 0); 6451} 6452 6453size_t bpf_program__size(const struct bpf_program *prog) 6454{ 6455 return prog->insns_cnt * sizeof(struct bpf_insn); --- 87 unchanged lines hidden (view full) --- 6543BPF_PROG_TYPE_FNS(sched_act, BPF_PROG_TYPE_SCHED_ACT); 6544BPF_PROG_TYPE_FNS(tracepoint, BPF_PROG_TYPE_TRACEPOINT); 6545BPF_PROG_TYPE_FNS(raw_tracepoint, BPF_PROG_TYPE_RAW_TRACEPOINT); 6546BPF_PROG_TYPE_FNS(xdp, BPF_PROG_TYPE_XDP); 6547BPF_PROG_TYPE_FNS(perf_event, BPF_PROG_TYPE_PERF_EVENT); 6548BPF_PROG_TYPE_FNS(tracing, BPF_PROG_TYPE_TRACING); 6549BPF_PROG_TYPE_FNS(struct_ops, BPF_PROG_TYPE_STRUCT_OPS); 6550BPF_PROG_TYPE_FNS(extension, BPF_PROG_TYPE_EXT); | 6699int bpf_program__fd(const struct bpf_program *prog) 6700{ 6701 return bpf_program__nth_fd(prog, 0); 6702} 6703 6704size_t bpf_program__size(const struct bpf_program *prog) 6705{ 6706 return prog->insns_cnt * sizeof(struct bpf_insn); --- 87 unchanged lines hidden (view full) --- 6794BPF_PROG_TYPE_FNS(sched_act, BPF_PROG_TYPE_SCHED_ACT); 6795BPF_PROG_TYPE_FNS(tracepoint, BPF_PROG_TYPE_TRACEPOINT); 6796BPF_PROG_TYPE_FNS(raw_tracepoint, BPF_PROG_TYPE_RAW_TRACEPOINT); 6797BPF_PROG_TYPE_FNS(xdp, BPF_PROG_TYPE_XDP); 6798BPF_PROG_TYPE_FNS(perf_event, BPF_PROG_TYPE_PERF_EVENT); 6799BPF_PROG_TYPE_FNS(tracing, BPF_PROG_TYPE_TRACING); 6800BPF_PROG_TYPE_FNS(struct_ops, BPF_PROG_TYPE_STRUCT_OPS); 6801BPF_PROG_TYPE_FNS(extension, BPF_PROG_TYPE_EXT); |
6802BPF_PROG_TYPE_FNS(sk_lookup, BPF_PROG_TYPE_SK_LOOKUP); |
|
6551 6552enum bpf_attach_type 6553bpf_program__get_expected_attach_type(struct bpf_program *prog) 6554{ 6555 return prog->expected_attach_type; 6556} 6557 6558void bpf_program__set_expected_attach_type(struct bpf_program *prog, --- 97 unchanged lines hidden (view full) --- 6656 .expected_attach_type = BPF_LSM_MAC, 6657 .attach_fn = attach_lsm), 6658 SEC_DEF("iter/", TRACING, 6659 .expected_attach_type = BPF_TRACE_ITER, 6660 .is_attach_btf = true, 6661 .attach_fn = attach_iter), 6662 BPF_EAPROG_SEC("xdp_devmap/", BPF_PROG_TYPE_XDP, 6663 BPF_XDP_DEVMAP), | 6803 6804enum bpf_attach_type 6805bpf_program__get_expected_attach_type(struct bpf_program *prog) 6806{ 6807 return prog->expected_attach_type; 6808} 6809 6810void bpf_program__set_expected_attach_type(struct bpf_program *prog, --- 97 unchanged lines hidden (view full) --- 6908 .expected_attach_type = BPF_LSM_MAC, 6909 .attach_fn = attach_lsm), 6910 SEC_DEF("iter/", TRACING, 6911 .expected_attach_type = BPF_TRACE_ITER, 6912 .is_attach_btf = true, 6913 .attach_fn = attach_iter), 6914 BPF_EAPROG_SEC("xdp_devmap/", BPF_PROG_TYPE_XDP, 6915 BPF_XDP_DEVMAP), |
6664 BPF_PROG_SEC("xdp", BPF_PROG_TYPE_XDP), | 6916 BPF_EAPROG_SEC("xdp_cpumap/", BPF_PROG_TYPE_XDP, 6917 BPF_XDP_CPUMAP), 6918 BPF_EAPROG_SEC("xdp", BPF_PROG_TYPE_XDP, 6919 BPF_XDP), |
6665 BPF_PROG_SEC("perf_event", BPF_PROG_TYPE_PERF_EVENT), 6666 BPF_PROG_SEC("lwt_in", BPF_PROG_TYPE_LWT_IN), 6667 BPF_PROG_SEC("lwt_out", BPF_PROG_TYPE_LWT_OUT), 6668 BPF_PROG_SEC("lwt_xmit", BPF_PROG_TYPE_LWT_XMIT), 6669 BPF_PROG_SEC("lwt_seg6local", BPF_PROG_TYPE_LWT_SEG6LOCAL), 6670 BPF_APROG_SEC("cgroup_skb/ingress", BPF_PROG_TYPE_CGROUP_SKB, 6671 BPF_CGROUP_INET_INGRESS), 6672 BPF_APROG_SEC("cgroup_skb/egress", BPF_PROG_TYPE_CGROUP_SKB, 6673 BPF_CGROUP_INET_EGRESS), 6674 BPF_APROG_COMPAT("cgroup/skb", BPF_PROG_TYPE_CGROUP_SKB), | 6920 BPF_PROG_SEC("perf_event", BPF_PROG_TYPE_PERF_EVENT), 6921 BPF_PROG_SEC("lwt_in", BPF_PROG_TYPE_LWT_IN), 6922 BPF_PROG_SEC("lwt_out", BPF_PROG_TYPE_LWT_OUT), 6923 BPF_PROG_SEC("lwt_xmit", BPF_PROG_TYPE_LWT_XMIT), 6924 BPF_PROG_SEC("lwt_seg6local", BPF_PROG_TYPE_LWT_SEG6LOCAL), 6925 BPF_APROG_SEC("cgroup_skb/ingress", BPF_PROG_TYPE_CGROUP_SKB, 6926 BPF_CGROUP_INET_INGRESS), 6927 BPF_APROG_SEC("cgroup_skb/egress", BPF_PROG_TYPE_CGROUP_SKB, 6928 BPF_CGROUP_INET_EGRESS), 6929 BPF_APROG_COMPAT("cgroup/skb", BPF_PROG_TYPE_CGROUP_SKB), |
6930 BPF_EAPROG_SEC("cgroup/sock_create", BPF_PROG_TYPE_CGROUP_SOCK, 6931 BPF_CGROUP_INET_SOCK_CREATE), 6932 BPF_EAPROG_SEC("cgroup/sock_release", BPF_PROG_TYPE_CGROUP_SOCK, 6933 BPF_CGROUP_INET_SOCK_RELEASE), |
|
6675 BPF_APROG_SEC("cgroup/sock", BPF_PROG_TYPE_CGROUP_SOCK, 6676 BPF_CGROUP_INET_SOCK_CREATE), 6677 BPF_EAPROG_SEC("cgroup/post_bind4", BPF_PROG_TYPE_CGROUP_SOCK, 6678 BPF_CGROUP_INET4_POST_BIND), 6679 BPF_EAPROG_SEC("cgroup/post_bind6", BPF_PROG_TYPE_CGROUP_SOCK, 6680 BPF_CGROUP_INET6_POST_BIND), 6681 BPF_APROG_SEC("cgroup/dev", BPF_PROG_TYPE_CGROUP_DEVICE, 6682 BPF_CGROUP_DEVICE), --- 36 unchanged lines hidden (view full) --- 6719 BPF_CGROUP_INET6_GETSOCKNAME), 6720 BPF_EAPROG_SEC("cgroup/sysctl", BPF_PROG_TYPE_CGROUP_SYSCTL, 6721 BPF_CGROUP_SYSCTL), 6722 BPF_EAPROG_SEC("cgroup/getsockopt", BPF_PROG_TYPE_CGROUP_SOCKOPT, 6723 BPF_CGROUP_GETSOCKOPT), 6724 BPF_EAPROG_SEC("cgroup/setsockopt", BPF_PROG_TYPE_CGROUP_SOCKOPT, 6725 BPF_CGROUP_SETSOCKOPT), 6726 BPF_PROG_SEC("struct_ops", BPF_PROG_TYPE_STRUCT_OPS), | 6934 BPF_APROG_SEC("cgroup/sock", BPF_PROG_TYPE_CGROUP_SOCK, 6935 BPF_CGROUP_INET_SOCK_CREATE), 6936 BPF_EAPROG_SEC("cgroup/post_bind4", BPF_PROG_TYPE_CGROUP_SOCK, 6937 BPF_CGROUP_INET4_POST_BIND), 6938 BPF_EAPROG_SEC("cgroup/post_bind6", BPF_PROG_TYPE_CGROUP_SOCK, 6939 BPF_CGROUP_INET6_POST_BIND), 6940 BPF_APROG_SEC("cgroup/dev", BPF_PROG_TYPE_CGROUP_DEVICE, 6941 BPF_CGROUP_DEVICE), --- 36 unchanged lines hidden (view full) --- 6978 BPF_CGROUP_INET6_GETSOCKNAME), 6979 BPF_EAPROG_SEC("cgroup/sysctl", BPF_PROG_TYPE_CGROUP_SYSCTL, 6980 BPF_CGROUP_SYSCTL), 6981 BPF_EAPROG_SEC("cgroup/getsockopt", BPF_PROG_TYPE_CGROUP_SOCKOPT, 6982 BPF_CGROUP_GETSOCKOPT), 6983 BPF_EAPROG_SEC("cgroup/setsockopt", BPF_PROG_TYPE_CGROUP_SOCKOPT, 6984 BPF_CGROUP_SETSOCKOPT), 6985 BPF_PROG_SEC("struct_ops", BPF_PROG_TYPE_STRUCT_OPS), |
6986 BPF_EAPROG_SEC("sk_lookup/", BPF_PROG_TYPE_SK_LOOKUP, 6987 BPF_SK_LOOKUP), |
|
6727}; 6728 6729#undef BPF_PROG_SEC_IMPL 6730#undef BPF_PROG_SEC 6731#undef BPF_APROG_SEC 6732#undef BPF_EAPROG_SEC 6733#undef BPF_APROG_COMPAT 6734#undef SEC_DEF --- 354 unchanged lines hidden (view full) --- 7089 return map ? &map->def : ERR_PTR(-EINVAL); 7090} 7091 7092const char *bpf_map__name(const struct bpf_map *map) 7093{ 7094 return map ? map->name : NULL; 7095} 7096 | 6988}; 6989 6990#undef BPF_PROG_SEC_IMPL 6991#undef BPF_PROG_SEC 6992#undef BPF_APROG_SEC 6993#undef BPF_EAPROG_SEC 6994#undef BPF_APROG_COMPAT 6995#undef SEC_DEF --- 354 unchanged lines hidden (view full) --- 7350 return map ? &map->def : ERR_PTR(-EINVAL); 7351} 7352 7353const char *bpf_map__name(const struct bpf_map *map) 7354{ 7355 return map ? map->name : NULL; 7356} 7357 |
7358enum bpf_map_type bpf_map__type(const struct bpf_map *map) 7359{ 7360 return map->def.type; 7361} 7362 7363int bpf_map__set_type(struct bpf_map *map, enum bpf_map_type type) 7364{ 7365 if (map->fd >= 0) 7366 return -EBUSY; 7367 map->def.type = type; 7368 return 0; 7369} 7370 7371__u32 bpf_map__map_flags(const struct bpf_map *map) 7372{ 7373 return map->def.map_flags; 7374} 7375 7376int bpf_map__set_map_flags(struct bpf_map *map, __u32 flags) 7377{ 7378 if (map->fd >= 0) 7379 return -EBUSY; 7380 map->def.map_flags = flags; 7381 return 0; 7382} 7383 7384__u32 bpf_map__numa_node(const struct bpf_map *map) 7385{ 7386 return map->numa_node; 7387} 7388 7389int bpf_map__set_numa_node(struct bpf_map *map, __u32 numa_node) 7390{ 7391 if (map->fd >= 0) 7392 return -EBUSY; 7393 map->numa_node = numa_node; 7394 return 0; 7395} 7396 7397__u32 bpf_map__key_size(const struct bpf_map *map) 7398{ 7399 return map->def.key_size; 7400} 7401 7402int bpf_map__set_key_size(struct bpf_map *map, __u32 size) 7403{ 7404 if (map->fd >= 0) 7405 return -EBUSY; 7406 map->def.key_size = size; 7407 return 0; 7408} 7409 7410__u32 bpf_map__value_size(const struct bpf_map *map) 7411{ 7412 return map->def.value_size; 7413} 7414 7415int bpf_map__set_value_size(struct bpf_map *map, __u32 size) 7416{ 7417 if (map->fd >= 0) 7418 return -EBUSY; 7419 map->def.value_size = size; 7420 return 0; 7421} 7422 |
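These getters/setters only take effect before the map is created (hence the map->fd >= 0 checks). A user-space sketch adjusting a map between open and load; the map name and chosen values are illustrative:

#include <errno.h>
#include <linux/bpf.h>
#include <bpf/libbpf.h>

/* call between bpf_object__open_file() and bpf_object__load() */
static int tune_map(struct bpf_object *obj)
{
        struct bpf_map *map = bpf_object__find_map_by_name(obj, "scratch");

        if (!map)
                return -ENOENT;
        /* each setter returns -EBUSY once the map has been created */
        if (bpf_map__set_value_size(map, 4096) ||
            bpf_map__set_map_flags(map, bpf_map__map_flags(map) | BPF_F_NUMA_NODE) ||
            bpf_map__set_numa_node(map, 0))
                return -EINVAL;
        return 0;
}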
|
7097__u32 bpf_map__btf_key_type_id(const struct bpf_map *map) 7098{ 7099 return map ? map->btf_key_type_id : 0; 7100} 7101 7102__u32 bpf_map__btf_value_type_id(const struct bpf_map *map) 7103{ 7104 return map ? map->btf_value_type_id : 0; --- 36 unchanged lines hidden (view full) --- 7141 return map->def.type == BPF_MAP_TYPE_PERF_EVENT_ARRAY; 7142} 7143 7144bool bpf_map__is_internal(const struct bpf_map *map) 7145{ 7146 return map->libbpf_type != LIBBPF_MAP_UNSPEC; 7147} 7148 | 7423__u32 bpf_map__btf_key_type_id(const struct bpf_map *map) 7424{ 7425 return map ? map->btf_key_type_id : 0; 7426} 7427 7428__u32 bpf_map__btf_value_type_id(const struct bpf_map *map) 7429{ 7430 return map ? map->btf_value_type_id : 0; --- 36 unchanged lines hidden (view full) --- 7467 return map->def.type == BPF_MAP_TYPE_PERF_EVENT_ARRAY; 7468} 7469 7470bool bpf_map__is_internal(const struct bpf_map *map) 7471{ 7472 return map->libbpf_type != LIBBPF_MAP_UNSPEC; 7473} 7474 |
7149void bpf_map__set_ifindex(struct bpf_map *map, __u32 ifindex) | 7475__u32 bpf_map__ifindex(const struct bpf_map *map) |
7150{ | 7476{ |
7477 return map->map_ifindex; 7478} 7479 7480int bpf_map__set_ifindex(struct bpf_map *map, __u32 ifindex) 7481{ 7482 if (map->fd >= 0) 7483 return -EBUSY; |
|
7151 map->map_ifindex = ifindex; | 7484 map->map_ifindex = ifindex; |
7485 return 0; |
|
7152} 7153 7154int bpf_map__set_inner_map_fd(struct bpf_map *map, int fd) 7155{ 7156 if (!bpf_map_type__is_map_in_map(map->def.type)) { 7157 pr_warn("error: unsupported map type\n"); 7158 return -EINVAL; 7159 } --- 191 unchanged lines hidden (view full) --- 7351{ 7352 link->disconnected = true; 7353} 7354 7355int bpf_link__destroy(struct bpf_link *link) 7356{ 7357 int err = 0; 7358 | 7486} 7487 7488int bpf_map__set_inner_map_fd(struct bpf_map *map, int fd) 7489{ 7490 if (!bpf_map_type__is_map_in_map(map->def.type)) { 7491 pr_warn("error: unsupported map type\n"); 7492 return -EINVAL; 7493 } --- 191 unchanged lines hidden (view full) --- 7685{ 7686 link->disconnected = true; 7687} 7688 7689int bpf_link__destroy(struct bpf_link *link) 7690{ 7691 int err = 0; 7692 |
7359 if (!link) | 7693 if (IS_ERR_OR_NULL(link)) |
7360 return 0; 7361 7362 if (!link->disconnected && link->detach) 7363 err = link->detach(link); 7364 if (link->destroy) 7365 link->destroy(link); 7366 if (link->pin_path) 7367 free(link->pin_path); --- 127 unchanged lines hidden (view full) --- 7495 link->fd = pfd; 7496 7497 if (ioctl(pfd, PERF_EVENT_IOC_SET_BPF, prog_fd) < 0) { 7498 err = -errno; 7499 free(link); 7500 pr_warn("program '%s': failed to attach to pfd %d: %s\n", 7501 bpf_program__title(prog, false), pfd, 7502 libbpf_strerror_r(err, errmsg, sizeof(errmsg))); | 7694 return 0; 7695 7696 if (!link->disconnected && link->detach) 7697 err = link->detach(link); 7698 if (link->destroy) 7699 link->destroy(link); 7700 if (link->pin_path) 7701 free(link->pin_path); --- 127 unchanged lines hidden (view full) --- 7829 link->fd = pfd; 7830 7831 if (ioctl(pfd, PERF_EVENT_IOC_SET_BPF, prog_fd) < 0) { 7832 err = -errno; 7833 free(link); 7834 pr_warn("program '%s': failed to attach to pfd %d: %s\n", 7835 bpf_program__title(prog, false), pfd, 7836 libbpf_strerror_r(err, errmsg, sizeof(errmsg))); |
7837 if (err == -EPROTO) 7838 pr_warn("program '%s': try add PERF_SAMPLE_CALLCHAIN to or remove exclude_callchain_[kernel|user] from pfd %d\n", 7839 bpf_program__title(prog, false), pfd); |
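The -EPROTO hint above fires when a program that calls bpf_get_stack()/bpf_get_stackid() is attached to a perf event opened without callchain sampling. A hedged sketch of opening a suitable event; the event type and sampling parameters are illustrative:

#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static int open_cycles_event(int cpu)
{
        struct perf_event_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_HARDWARE;
        attr.config = PERF_COUNT_HW_CPU_CYCLES;
        attr.freq = 1;
        attr.sample_freq = 99;
        /* required if the attached BPF program collects stack traces */
        attr.sample_type = PERF_SAMPLE_CALLCHAIN;

        return syscall(__NR_perf_event_open, &attr, -1 /* pid */, cpu,
                       -1 /* group fd */, 0 /* flags */);
}

/* then: link = bpf_program__attach_perf_event(prog, fd); */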
|
7503 return ERR_PTR(err); 7504 } 7505 if (ioctl(pfd, PERF_EVENT_IOC_ENABLE, 0) < 0) { 7506 err = -errno; 7507 free(link); 7508 pr_warn("program '%s': failed to enable pfd %d: %s\n", 7509 bpf_program__title(prog, false), pfd, 7510 libbpf_strerror_r(err, errmsg, sizeof(errmsg))); --- 429 unchanged lines hidden (view full) --- 7940} 7941 7942struct bpf_link * 7943bpf_program__attach_netns(struct bpf_program *prog, int netns_fd) 7944{ 7945 return bpf_program__attach_fd(prog, netns_fd, "netns"); 7946} 7947 | 7840 return ERR_PTR(err); 7841 } 7842 if (ioctl(pfd, PERF_EVENT_IOC_ENABLE, 0) < 0) { 7843 err = -errno; 7844 free(link); 7845 pr_warn("program '%s': failed to enable pfd %d: %s\n", 7846 bpf_program__title(prog, false), pfd, 7847 libbpf_strerror_r(err, errmsg, sizeof(errmsg))); --- 429 unchanged lines hidden (view full) --- 8277} 8278 8279struct bpf_link * 8280bpf_program__attach_netns(struct bpf_program *prog, int netns_fd) 8281{ 8282 return bpf_program__attach_fd(prog, netns_fd, "netns"); 8283} 8284 |
8285struct bpf_link *bpf_program__attach_xdp(struct bpf_program *prog, int ifindex) 8286{ 8287 /* target_fd/target_ifindex use the same field in LINK_CREATE */ 8288 return bpf_program__attach_fd(prog, ifindex, "xdp"); 8289} 8290 |
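bpf_program__attach_xdp() is a thin wrapper over LINK_CREATE with the target ifindex. A user-space sketch; the interface name is illustrative:

#include <net/if.h>
#include <bpf/libbpf.h>

/* 'prog' must already be loaded (e.g. SEC("xdp")) */
static struct bpf_link *attach_to_eth0(struct bpf_program *prog)
{
        int ifindex = if_nametoindex("eth0");   /* illustrative interface */

        if (!ifindex)
                return NULL;
        /* returns an ERR_PTR on failure; check with libbpf_get_error() */
        return bpf_program__attach_xdp(prog, ifindex);
}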
|
7948struct bpf_link * 7949bpf_program__attach_iter(struct bpf_program *prog, 7950 const struct bpf_iter_attach_opts *opts) 7951{ | 8291struct bpf_link * 8292bpf_program__attach_iter(struct bpf_program *prog, 8293 const struct bpf_iter_attach_opts *opts) 8294{ |
8295 DECLARE_LIBBPF_OPTS(bpf_link_create_opts, link_create_opts); |
|
7952 char errmsg[STRERR_BUFSIZE]; 7953 struct bpf_link *link; 7954 int prog_fd, link_fd; | 8296 char errmsg[STRERR_BUFSIZE]; 8297 struct bpf_link *link; 8298 int prog_fd, link_fd; |
8299 __u32 target_fd = 0; |
|
7955 7956 if (!OPTS_VALID(opts, bpf_iter_attach_opts)) 7957 return ERR_PTR(-EINVAL); 7958 | 8300 8301 if (!OPTS_VALID(opts, bpf_iter_attach_opts)) 8302 return ERR_PTR(-EINVAL); 8303 |
8304 if (OPTS_HAS(opts, map_fd)) { 8305 target_fd = opts->map_fd; 8306 link_create_opts.flags = BPF_ITER_LINK_MAP_FD; 8307 } 8308 |
|
7959 prog_fd = bpf_program__fd(prog); 7960 if (prog_fd < 0) { 7961 pr_warn("program '%s': can't attach before loaded\n", 7962 bpf_program__title(prog, false)); 7963 return ERR_PTR(-EINVAL); 7964 } 7965 7966 link = calloc(1, sizeof(*link)); 7967 if (!link) 7968 return ERR_PTR(-ENOMEM); 7969 link->detach = &bpf_link__detach_fd; 7970 | 8309 prog_fd = bpf_program__fd(prog); 8310 if (prog_fd < 0) { 8311 pr_warn("program '%s': can't attach before loaded\n", 8312 bpf_program__title(prog, false)); 8313 return ERR_PTR(-EINVAL); 8314 } 8315 8316 link = calloc(1, sizeof(*link)); 8317 if (!link) 8318 return ERR_PTR(-ENOMEM); 8319 link->detach = &bpf_link__detach_fd; 8320 |
7971 link_fd = bpf_link_create(prog_fd, 0, BPF_TRACE_ITER, NULL); | 8321 link_fd = bpf_link_create(prog_fd, target_fd, BPF_TRACE_ITER, 8322 &link_create_opts); |
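With the new map_fd option, an iterator program can be parameterized with a map (for example a "bpf_map_elem" iterator). A sketch, assuming 'prog' is an already-loaded SEC("iter/bpf_map_elem") program:

#include <unistd.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

static int dump_map_via_iter(struct bpf_program *prog, int map_fd)
{
        DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts, .map_fd = map_fd);
        struct bpf_link *link;
        int iter_fd, err;
        char buf[256];

        link = bpf_program__attach_iter(prog, &opts);
        err = libbpf_get_error(link);
        if (err)
                return err;

        iter_fd = bpf_iter_create(bpf_link__fd(link));
        if (iter_fd < 0) {
                bpf_link__destroy(link);
                return iter_fd;
        }
        while (read(iter_fd, buf, sizeof(buf)) > 0)
                ;       /* consume the text produced by the iterator program */

        close(iter_fd);
        bpf_link__destroy(link);
        return 0;
}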
7972 if (link_fd < 0) { 7973 link_fd = -errno; 7974 free(link); 7975 pr_warn("program '%s': failed to attach to iterator: %s\n", 7976 bpf_program__title(prog, false), 7977 libbpf_strerror_r(link_fd, errmsg, sizeof(errmsg))); 7978 return ERR_PTR(link_fd); 7979 } --- 166 unchanged lines hidden (view full) --- 8146 free(cpu_buf->buf); 8147 free(cpu_buf); 8148} 8149 8150void perf_buffer__free(struct perf_buffer *pb) 8151{ 8152 int i; 8153 | 8323 if (link_fd < 0) { 8324 link_fd = -errno; 8325 free(link); 8326 pr_warn("program '%s': failed to attach to iterator: %s\n", 8327 bpf_program__title(prog, false), 8328 libbpf_strerror_r(link_fd, errmsg, sizeof(errmsg))); 8329 return ERR_PTR(link_fd); 8330 } --- 166 unchanged lines hidden (view full) --- 8497 free(cpu_buf->buf); 8498 free(cpu_buf); 8499} 8500 8501void perf_buffer__free(struct perf_buffer *pb) 8502{ 8503 int i; 8504 |
8154 if (!pb) | 8505 if (IS_ERR_OR_NULL(pb)) |
8155 return; 8156 if (pb->cpu_bufs) { 8157 for (i = 0; i < pb->cpu_cnt; i++) { 8158 struct perf_cpu_buf *cpu_buf = pb->cpu_bufs[i]; 8159 8160 if (!cpu_buf) 8161 continue; 8162 --- 96 unchanged lines hidden (view full) --- 8259 8260 return __perf_buffer__new(map_fd, page_cnt, &p); 8261} 8262 8263static struct perf_buffer *__perf_buffer__new(int map_fd, size_t page_cnt, 8264 struct perf_buffer_params *p) 8265{ 8266 const char *online_cpus_file = "/sys/devices/system/cpu/online"; | 8506 return; 8507 if (pb->cpu_bufs) { 8508 for (i = 0; i < pb->cpu_cnt; i++) { 8509 struct perf_cpu_buf *cpu_buf = pb->cpu_bufs[i]; 8510 8511 if (!cpu_buf) 8512 continue; 8513 --- 96 unchanged lines hidden (view full) --- 8610 8611 return __perf_buffer__new(map_fd, page_cnt, &p); 8612} 8613 8614static struct perf_buffer *__perf_buffer__new(int map_fd, size_t page_cnt, 8615 struct perf_buffer_params *p) 8616{ 8617 const char *online_cpus_file = "/sys/devices/system/cpu/online"; |
8267 struct bpf_map_info map = {}; | 8618 struct bpf_map_info map; |
8268 char msg[STRERR_BUFSIZE]; 8269 struct perf_buffer *pb; 8270 bool *online = NULL; 8271 __u32 map_info_len; 8272 int err, i, j, n; 8273 8274 if (page_cnt & (page_cnt - 1)) { 8275 pr_warn("page count should be power of two, but is %zu\n", 8276 page_cnt); 8277 return ERR_PTR(-EINVAL); 8278 } 8279 | 8619 char msg[STRERR_BUFSIZE]; 8620 struct perf_buffer *pb; 8621 bool *online = NULL; 8622 __u32 map_info_len; 8623 int err, i, j, n; 8624 8625 if (page_cnt & (page_cnt - 1)) { 8626 pr_warn("page count should be power of two, but is %zu\n", 8627 page_cnt); 8628 return ERR_PTR(-EINVAL); 8629 } 8630 |
8631 /* best-effort sanity checks */ 8632 memset(&map, 0, sizeof(map)); |
|
8280 map_info_len = sizeof(map); 8281 err = bpf_obj_get_info_by_fd(map_fd, &map, &map_info_len); 8282 if (err) { 8283 err = -errno; | 8633 map_info_len = sizeof(map); 8634 err = bpf_obj_get_info_by_fd(map_fd, &map, &map_info_len); 8635 if (err) { 8636 err = -errno; |
8284 pr_warn("failed to get map info for map FD %d: %s\n", 8285 map_fd, libbpf_strerror_r(err, msg, sizeof(msg))); 8286 return ERR_PTR(err); | 8637 /* if BPF_OBJ_GET_INFO_BY_FD is supported, will return 8638 * -EBADFD, -EFAULT, or -E2BIG on real error 8639 */ 8640 if (err != -EINVAL) { 8641 pr_warn("failed to get map info for map FD %d: %s\n", 8642 map_fd, libbpf_strerror_r(err, msg, sizeof(msg))); 8643 return ERR_PTR(err); 8644 } 8645 pr_debug("failed to get map info for FD %d; API not supported? Ignoring...\n", 8646 map_fd); 8647 } else { 8648 if (map.type != BPF_MAP_TYPE_PERF_EVENT_ARRAY) { 8649 pr_warn("map '%s' should be BPF_MAP_TYPE_PERF_EVENT_ARRAY\n", 8650 map.name); 8651 return ERR_PTR(-EINVAL); 8652 } |
8287 } 8288 | 8653 } 8654 |
8289 if (map.type != BPF_MAP_TYPE_PERF_EVENT_ARRAY) { 8290 pr_warn("map '%s' should be BPF_MAP_TYPE_PERF_EVENT_ARRAY\n", 8291 map.name); 8292 return ERR_PTR(-EINVAL); 8293 } 8294 | |
8295 pb = calloc(1, sizeof(*pb)); 8296 if (!pb) 8297 return ERR_PTR(-ENOMEM); 8298 8299 pb->event_cb = p->event_cb; 8300 pb->sample_cb = p->sample_cb; 8301 pb->lost_cb = p->lost_cb; 8302 pb->ctx = p->ctx; --- 13 unchanged lines hidden (view full) --- 8316 if (p->cpu_cnt > 0) { 8317 pb->cpu_cnt = p->cpu_cnt; 8318 } else { 8319 pb->cpu_cnt = libbpf_num_possible_cpus(); 8320 if (pb->cpu_cnt < 0) { 8321 err = pb->cpu_cnt; 8322 goto error; 8323 } | 8655 pb = calloc(1, sizeof(*pb)); 8656 if (!pb) 8657 return ERR_PTR(-ENOMEM); 8658 8659 pb->event_cb = p->event_cb; 8660 pb->sample_cb = p->sample_cb; 8661 pb->lost_cb = p->lost_cb; 8662 pb->ctx = p->ctx; --- 13 unchanged lines hidden (view full) --- 8676 if (p->cpu_cnt > 0) { 8677 pb->cpu_cnt = p->cpu_cnt; 8678 } else { 8679 pb->cpu_cnt = libbpf_num_possible_cpus(); 8680 if (pb->cpu_cnt < 0) { 8681 err = pb->cpu_cnt; 8682 goto error; 8683 } |
8324 if (map.max_entries < pb->cpu_cnt) | 8684 if (map.max_entries && map.max_entries < pb->cpu_cnt) |
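For completeness, a user-space sketch of consuming a BPF_MAP_TYPE_PERF_EVENT_ARRAY through this API, using the opts-based perf_buffer__new() signature of this libbpf version; how the map fd is obtained is left out:

#include <bpf/libbpf.h>

static void handle_sample(void *ctx, int cpu, void *data, __u32 size)
{
        /* 'data' points at one raw sample pushed by bpf_perf_event_output() */
}

static int poll_events(int map_fd)
{
        struct perf_buffer_opts pb_opts = { .sample_cb = handle_sample };
        struct perf_buffer *pb;
        int err;

        pb = perf_buffer__new(map_fd, 64 /* pages per CPU, power of two */, &pb_opts);
        err = libbpf_get_error(pb);
        if (err)
                return err;

        while ((err = perf_buffer__poll(pb, 100 /* ms */)) >= 0)
                ;       /* callbacks fire from inside poll */

        /* with the IS_ERR_OR_NULL change above, this is also safe right
         * after a failed perf_buffer__new() */
        perf_buffer__free(pb);
        return err;
}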
8325 pb->cpu_cnt = map.max_entries; 8326 } 8327 8328 pb->events = calloc(pb->cpu_cnt, sizeof(*pb->events)); 8329 if (!pb->events) { 8330 err = -ENOMEM; 8331 pr_warn("failed to allocate events: out of memory\n"); 8332 goto error; --- 656 unchanged lines hidden (view full) --- 8989 int i; 8990 8991 for (i = 0; i < s->prog_cnt; i++) { 8992 struct bpf_program *prog = *s->progs[i].prog; 8993 struct bpf_link **link = s->progs[i].link; 8994 const struct bpf_sec_def *sec_def; 8995 const char *sec_name = bpf_program__title(prog, false); 8996 | 8685 pb->cpu_cnt = map.max_entries; 8686 } 8687 8688 pb->events = calloc(pb->cpu_cnt, sizeof(*pb->events)); 8689 if (!pb->events) { 8690 err = -ENOMEM; 8691 pr_warn("failed to allocate events: out of memory\n"); 8692 goto error; --- 656 unchanged lines hidden (view full) --- 9349 int i; 9350 9351 for (i = 0; i < s->prog_cnt; i++) { 9352 struct bpf_program *prog = *s->progs[i].prog; 9353 struct bpf_link **link = s->progs[i].link; 9354 const struct bpf_sec_def *sec_def; 9355 const char *sec_name = bpf_program__title(prog, false); 9356 |
9357 if (!prog->load) 9358 continue; 9359 |
|
8997 sec_def = find_sec_def(sec_name); 8998 if (!sec_def || !sec_def->attach_fn) 8999 continue; 9000 9001 *link = sec_def->attach_fn(sec_def, prog); 9002 if (IS_ERR(*link)) { 9003 pr_warn("failed to auto-attach program '%s': %ld\n", 9004 bpf_program__name(prog), PTR_ERR(*link)); --- 6 unchanged lines hidden (view full) --- 9011 9012void bpf_object__detach_skeleton(struct bpf_object_skeleton *s) 9013{ 9014 int i; 9015 9016 for (i = 0; i < s->prog_cnt; i++) { 9017 struct bpf_link **link = s->progs[i].link; 9018 | 9360 sec_def = find_sec_def(sec_name); 9361 if (!sec_def || !sec_def->attach_fn) 9362 continue; 9363 9364 *link = sec_def->attach_fn(sec_def, prog); 9365 if (IS_ERR(*link)) { 9366 pr_warn("failed to auto-attach program '%s': %ld\n", 9367 bpf_program__name(prog), PTR_ERR(*link)); --- 6 unchanged lines hidden (view full) --- 9374 9375void bpf_object__detach_skeleton(struct bpf_object_skeleton *s) 9376{ 9377 int i; 9378 9379 for (i = 0; i < s->prog_cnt; i++) { 9380 struct bpf_link **link = s->progs[i].link; 9381 |
9019 if (!IS_ERR_OR_NULL(*link)) 9020 bpf_link__destroy(*link); | 9382 bpf_link__destroy(*link); |
9021 *link = NULL; 9022 } 9023} 9024 9025void bpf_object__destroy_skeleton(struct bpf_object_skeleton *s) 9026{ 9027 if (s->progs) 9028 bpf_object__detach_skeleton(s); 9029 if (s->obj) 9030 bpf_object__close(*s->obj); 9031 free(s->maps); 9032 free(s->progs); 9033 free(s); 9034} | 9383 *link = NULL; 9384 } 9385} 9386 9387void bpf_object__destroy_skeleton(struct bpf_object_skeleton *s) 9388{ 9389 if (s->progs) 9390 bpf_object__detach_skeleton(s); 9391 if (s->obj) 9392 bpf_object__close(*s->obj); 9393 free(s->maps); 9394 free(s->progs); 9395 free(s); 9396} |