Version: 1.1.X

Developer Notes for Access Gateway

This section provides a guide for anyone testing existing features, fixing a bug, or adding a new feature to the Access Gateway. All developers are highly encouraged to maintain this guide so that it stays up to date and continues to grow.

Configuration/system settings

If you have a gateway running in a VM (as described in the Quick Start Guide), the magma directory is shared between the guest and host machine, so changes made on either system reflect on the other. Exceptions to this rule are the systemd unit files and python scripts. Changes to these files on the guest or host need to be manually synced.

Configuration files/directories

  • /etc/magma/: location of default configurations and templates for all services
  • /etc/magma/gateway.mconfig: main file that contains the configuration for all services, as exposed via the Orc8r API
  • /var/opt/magma/configs/gateway.mconfig: For gateways connected to an orchestrator, the configuration from Orc8r is periodically streamed to the gateway and written here. This streamed config takes precedence over /etc/magma/gateway.mconfig.
  • /etc/magma/<service_name>.yml: Service configuration file, in YAML format. These configurations are local and are not exposed through the API. These include logging level, local network interface names, etc.
  • /etc/magma/templates/<service_name>.conf.template: This contains the structured template for the .conf file used as input to some services, such as Control-proxy, Dnsd, MME and Redis.
  • /var/opt/magma/tmp/<service_name>.conf: The configuration file read by some services, such as Control-proxy, Dnsd, MME and Redis, at start-up. This file is generated by mapping the configuration values from gateway.mconfig and <service_name>.yml to the template defined in <service_name>.conf.template.
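
The precedence rule above (the streamed Orc8r config wins over the local default when it exists) can be sketched in shell; this is a minimal illustration, not a script the gateway actually runs:

```shell
# Sketch: determine which gateway.mconfig is in effect.
# The streamed config, written by the gateway when connected to an
# orchestrator, takes precedence over the local default.
STREAMED=/var/opt/magma/configs/gateway.mconfig
DEFAULT=/etc/magma/gateway.mconfig

if [ -f "$STREAMED" ]; then
  ACTIVE="$STREAMED"
else
  ACTIVE="$DEFAULT"
fi
echo "active config: $ACTIVE"
```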

Systemd unit configuration files

  • /etc/systemd/system/magma@<service_name>.service: Systemd unit files for Magma service. Note that these files are maintained under magma/lte/gateway/deploy/roles/magma/files/systemd/ and are copied into the /etc/systemd/system directory of the VM at the time of provisioning. You need to manually sync changes to these files between guest and host OS.
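
The manual sync described above can be sketched as a dry run that prints the commands instead of executing them; the repo checkout location (~/magma) and the mme unit file are example assumptions:

```shell
# Dry-run sketch: print each command instead of executing it.
# Swap in run() { "$@"; } to actually apply the changes.
run() { echo "+ $*"; }

# Copy the updated unit file from the repo checkout (assumed at ~/magma)
run sudo cp ~/magma/lte/gateway/deploy/roles/magma/files/systemd/magma@mme.service \
    /etc/systemd/system/
# Reload systemd so it picks up the changed unit file, then restart the service
run sudo systemctl daemon-reload
run sudo service magma@mme restart
```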

Python scripts to generate configs

  • generate_<service>_config.py: Scripts that generate the <service_name>.conf file for some services. These are executed every time a service starts. Note that these files are maintained under magma/lte/gateway/python/scripts and copied to the /usr/local/bin directory in the guest OS at the time of provisioning. Changes to these scripts need to be manually synced between the guest and host OS.

Testing

Connecting a physical eNodeB and UE to Gateway VM

While the S1ap integration tests provide simulated UEs and eNodeBs for testing your AGW VM during active development, you can extend the testing to a physical UE and eNodeB. To connect a physical eNodeB to the gateway VM:

  1. Connect the eNodeB to a port on the host machine, say it is interface en9.
  2. From the VirtualBox GUI, switch the Adapter 1 (for eth1 interface) from Host-only to Bridged mode and bridge it to interface en9 from above.
  3. In the gateway VM, modify the nat_iface in /etc/magma/pipelined.yml from eth2 to eth0, then restart all services.
  4. In the gateway VM, follow the steps in EnodeB Configuration. Make sure the earfcn set in the enodebd section of gateway.mconfig is the one that is supported by the eNodeB under consideration.
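
The interface change and restart in step 3 can be sketched as a dry run that prints the commands instead of executing them; the sed pattern assumes the file currently reads "nat_iface: eth2", which may not match your configuration:

```shell
# Dry-run sketch: print each command instead of executing it.
# Swap in run() { "$@"; } to actually apply the changes.
run() { echo "+ $*"; }

# Switch nat_iface from eth2 to eth0 (sed pattern is an assumption)
run sudo sed -i 's/nat_iface: eth2/nat_iface: eth0/' /etc/magma/pipelined.yml
# Stop everything, then restart magmad, which brings the other services back up
run sudo service magma@\* stop
run sudo service magma@magmad restart
```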

To connect a physical UE to the gateway VM,

  1. Use a programmable SIM which is provisioned with the LTE auth key that you will use in the EPC.
  2. On the gateway VM, add the subscriber using the CLI:
     magtivate
     subscriber_cli.py add --lte-auth-key <base64 LTE auth key> IMSI<15 digit IMSI>
  3. On the UE, turn airplane mode on, then off, to trigger a fresh attach.
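
One way to confirm the subscriber was added before toggling airplane mode is to list the provisioned subscribers; this dry-run sketch prints the commands instead of executing them, and the `list` subcommand of subscriber_cli.py is an assumption:

```shell
# Dry-run sketch: print each command instead of executing it.
# Swap in run() { "$@"; } to actually run them on the gateway VM.
run() { echo "+ $*"; }

# Enter the Magma python virtualenv, then list provisioned subscribers
# to confirm the new IMSI is present (subcommand name is an assumption)
run magtivate
run subscriber_cli.py list
```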

Debugging

Logs

To change the logging level for a particular service, modify the log_level in /etc/magma/<service_name>.yml.

  • /var/log/syslog: gives a good view of all the Magma services running on the AGW. This is a good place to check whether the AGW is connecting to the orchestrator, any GRPC errors or which service is causing a cascaded crash (e.g. a crash in Sessiond can cause Mme service to terminate). A good way to filter the logs from individual processes is with journalctl. For example, to look at logs from SubscriberDb use: sudo journalctl -fu magma@subscriberdb

  • /var/log/mme.log is a symbolic link that points to the latest log file created by the MME service. The Mme service creates a new log file with name MME.magma-dev.root.log.INFO.<date>-<time>.<PID> every time the service is (re)started. The AGW maintains the 10 most recent log files.

  • /var/log/enodebd.log contains the logs from Enodebd
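
As an example of the log_level change mentioned above, the following dry-run sketch prints the commands rather than executing them; the sed pattern assumes the MME service's file currently reads "log_level: INFO", which may not match yours:

```shell
# Dry-run sketch: print each command instead of executing it.
# Swap in run() { "$@"; } to actually apply the change.
run() { echo "+ $*"; }

# Raise the MME service's logging level (assumes the file currently says INFO)
run sudo sed -i 's/log_level: INFO/log_level: DEBUG/' /etc/magma/mme.yml
run sudo service magma@mme restart
# Follow the service's journal to see the more verbose output
run sudo journalctl -fu magma@mme
```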

CLIs

Many services have a command line interface (CLI) that can be used for debugging and configuration. Check the [AGW Readme](./readme_agw.md#command-line-interfaces) for more details.

Analyzing raw network packets

If you want to look at the raw packets sent over the network, tcpdump is a useful tool to capture and save them to a file. It is not installed by default on the access gateway, so you need to do sudo apt-get install tcpdump to install it. Next, you can:

  • Capture packets on the eNodeB interface: sudo tcpdump -i eth1 -w <file_name>
  • Capture packets on all interfaces (filter out SSH traffic so your local SSH session doesn't bloat the capture): sudo tcpdump -i any "not port 22" -w <file_name>
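
For S1 debugging specifically, it can help to capture only SCTP traffic (S1AP rides on SCTP) and read the capture back later; this dry-run sketch prints the commands instead of executing them, and the capture filter and file name are illustrative assumptions:

```shell
# Dry-run sketch: print each command instead of executing it.
# Swap in run() { "$@"; } to actually capture on the gateway.
run() { echo "+ $*"; }

# Capture only SCTP (S1AP) traffic on the eNodeB-facing interface
run sudo tcpdump -i eth1 sctp -w s1ap.pcap
# Read the capture back later (or open s1ap.pcap in Wireshark)
run tcpdump -r s1ap.pcap -nn
```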

Analyzing coredumps

If any of the AGW services crash with a segmentation fault, a coredump is preserved in a directory named core-<timestamp>-<process_name>[-<PID>] under the /tmp directory on the AGW. For example:

  • core-1558015879-ITTI_bundle is a coredump from MME
  • core-1585761965-sessiond-12741_bundle is a coredump from sessiond
  • core-1582710288-python3-17823_bundle is a coredump from a python service running with PID 17823

For the coredumps generated by MME or sessiond, you can read them through gdb, as follows:

cd /tmp/<core-directory>/
gunzip <core gzip file>
gdb /usr/local/bin/<process name> <unzipped core file>

From within the gdb shell, the bt command will display the backtrace for the segmentation fault.

Running MME with gdb

If you need to debug MME with gdb, make sure all the services that it is dependent on are already running. Follow the steps below:

  1. sudo service magma@magmad start
  2. sudo service magma@mme stop
  3. sudo service sctpd start
  4. sudo service magma@mobilityd start
  5. sudo service magma@pipelined start
  6. sudo service magma@sessiond start
  7. sudo gdb /usr/local/bin/mme

Checking Redis entries for stateless services

When the services are running in stateless mode, as described in Testing stateless Access Gateway, you can connect to the redis service with redis-cli -p 6380. Then on the shell, you can list all the keys with KEYS *. The keys for state follow the pattern <IMSI>:<Service/MME task>. For example:

  • IMSI001010000000001:SPGW is the state preserved for IMSI 001010000000001 by SPGW task in Mme service
  • Keys such as spgw_state, s1ap_state, and mme_nas_state are used to store gateway-wide state for a particular task in the MME process
  • Mobilityd stores state with the key mobilityd:sid_to_descriptors
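
The key patterns above can be explored with redis-cli; this dry-run sketch prints the commands instead of executing them, and note that the stored values are serialized state, not plain text:

```shell
# Dry-run sketch: print each command instead of executing it.
# Swap in run() { "$@"; } to actually query redis on the gateway.
run() { echo "+ $*"; }

# List only per-IMSI state keys
run redis-cli -p 6380 KEYS 'IMSI*'
# Check the type of a gateway-wide MME task key (values are serialized state)
run redis-cli -p 6380 TYPE mme_nas_state
```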